Using SEOmoz for the first time, the initial crawl said we have 9,000 errors, which were basically 4,500 duplicate pages and 4,500 duplicate page titles (i.e. http://domainname.com/etc.html and http://www.domainname.com/etc.html).
We altered our website accordingly by changing all internal links to http://www.domainname.com/etc.html, as Google and all the other engines are listing us using the www. prefix. On the next crawl we now have even more of these duplicate errors. How do we go about removing them, as we only have one file for each page on the server?
Google downgraded our website by 35% in April, and as this is a retail site we are losing a lot of business.
I would very much appreciate it if anyone has the time to answer.
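Since every page is reachable under both the bare and the www. host, the crawler counts each file twice. As a rough illustration (the domain below is just this question's placeholder), collapsing both forms onto one canonical host is exactly what the redirect fix accomplishes:

```python
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url, preferred_host="www.domainname.com"):
    """Rewrite a URL so its host uses the preferred www. form."""
    parts = urlsplit(url)
    host = parts.netloc.lower()
    # If the URL uses the bare host, swap in the www. version.
    if host == preferred_host.removeprefix("www."):
        host = preferred_host
    return urlunsplit((parts.scheme, host, parts.path, parts.query, parts.fragment))

# The two URL forms the crawler reports as duplicates:
urls = [
    "http://domainname.com/page.html",
    "http://www.domainname.com/page.html",
]
# After canonicalization they collapse to a single entry.
unique = {canonicalize(u) for u in urls}
```

Once every internal link and a site-wide 301 agree on the www. form, the duplicate counts should fall away on subsequent crawls.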
Howard
-
Thx Alsvik,
Yes, I have just done this. We had originally set this up with Google, but somehow the site owner verification for the URL got removed and Google had reverted to "No preference". Hopefully this will sort the problem.
-
Have you changed the preferred domain in GWT? If you exclude one of the two (domain.com or www.domain.com), Google will not register duplicate pages/content on e.g. /etc. And add a redirect from domain.com to www.domain.com. That should fix the issue.
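For the redirect part, a minimal sketch of what that rule often looks like, assuming an Apache server with mod_rewrite enabled (the domain is a placeholder; adapt to your own host and server software):

```apache
# Hypothetical .htaccess sketch: 301-redirect the bare host to the www. host.
RewriteEngine On
RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]
```

The permanent (301) status matters: it tells search engines to consolidate the two hosts rather than index both.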
Related Questions
-
Forced Redirects/HTTP<>HTTPS 301 Question
Hi All, Sorry for what's about to be a long-ish question, but tl;dr: Has anyone else had experience with a 301 redirect at the server level between HTTP and HTTPS versions of a site in order to maintain accurate social media share counts? This is new to me and I'm wondering how common it is.

I'm having issues with this forced redirect between HTTP/HTTPS as outlined below and am struggling to find any information that will help me to troubleshoot this or better understand the situation. If anyone has any recommendations for things to try or sources to read up on, I'd appreciate it. I'm especially concerned about any issues that this may be causing at the SEO level and the known-unknowns.

A magazine I work for recently relaunched after switching platforms from Atavist to Newspack (which is run via WordPress). Since then, we've been having some issues with 301s, but they relate to new stories that are native to our new platform/CMS and have had zero URL changes. We've always used HTTPS.

Basically, the Facebook preview for any post we make linking to the new site, including these new (non-migrated) pages, shows a 301 in the title and no image. This also overrides the social media metadata we set through Yoast Premium. I ran some of the links through the Facebook debugger and it appears that Facebook is reading these links to our site (using https) as redirects to http that then redirect to https. I was told by our tech support person on Newspack's team that this is intentional, so that Facebook will maintain accurate share counts versus separate share counts for http/https; however, this forced redirect seems to be failing if we can't post our links with any metadata. (The only way to reliably fix it is by adding a query parameter to each URL which, obviously, still gives us inaccurate share counts.)
This is the first time I've encountered this intentional redirect thing and I've asked a few times for more information about how it's set up just for my own edification, but all I can get is that it’s something managed at the server level and is designed to prevent separate share counts for HTTP and HTTPS. Has anyone encountered this method before, and can anyone either explain it to me or point me in the direction of a resource where I can learn more about how it's configured as well as the pros and cons? I'm especially concerned about our SEO with this and how this may impact the way search engines read our site. So far, nothing's come up on scans, but I'd like to stay one step ahead of this. Thanks in advance!
Technical SEO | ogiovetti
-
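The chain the Facebook debugger reveals can also be sanity-checked offline. A minimal sketch (URLs hypothetical) that flags a redirect chain whose scheme flips more than once, which is the https → http → https bounce described above:

```python
def scheme_bounce(hops):
    """Given an ordered list of URLs in a redirect chain, report whether
    the scheme switches more than once (e.g. https -> http -> https),
    the pattern share-count consolidation redirects can produce."""
    schemes = [u.split("://", 1)[0] for u in hops]
    switches = sum(1 for a, b in zip(schemes, schemes[1:]) if a != b)
    return switches > 1

# Hypothetical chain as reported by a link debugger:
chain = [
    "https://example.com/story",
    "http://example.com/story",
    "https://example.com/story",
]
```

A chain that bounces schemes forces scrapers through two extra hops, which can surface the intermediate 301 instead of the final page's metadata.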
I am trying to generate GEO meta tags for my website, where one page lists multiple locations. My question is: can I add GEO tagging for every address?
Am I restricted to one geo tag per page, or can I add multiple geo tags?
Technical SEO | lina_digital
-
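For context on the geo-tag question, the classic (non-standard) geo meta tags are declared once in a page's `<head>` and describe the page as a whole, which is why they don't extend naturally to several addresses. A sketch with hypothetical values:

```html
<!-- Hypothetical values; these non-standard tags apply to the whole page. -->
<meta name="geo.region" content="US-NY" />
<meta name="geo.placename" content="New York" />
<meta name="geo.position" content="40.7128;-74.0060" />
<meta name="ICBM" content="40.7128, -74.0060" />
```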
Questionable Referral Traffic
Hey SEOMozers, I'm working with a client that has a suspicious traffic pattern going on. In October, a referral domain called profitclicking.com started passing visits to the site. Almost in parallel, the overall visits decreased anywhere from 35 to 50%. After checking out profitclicking.com further, it promises more traffic "with no SEO knowledge". The client doesn't think that this service was signed up for internally. Regardless, it obviously smells pretty fishy, and I'm searching for a way I can disallow traffic from this site. Could I simply write a disallow statement in the robots.txt and be done with it? Just wanted to see if anyone else had any other ideas before recommending a solution. Thanks!
Technical SEO | kylehungate
-
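For reference, the kind of robots.txt rule the question contemplates looks like the sketch below (path hypothetical). Note that robots.txt only instructs crawlers which paths to skip; on its own it cannot target a referring site or stop human visitors arriving via referral links.

```text
# Hypothetical sketch; robots.txt speaks to crawlers, not to referring sites.
User-agent: *
Disallow: /landing-page/
```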
Questions about root domain setup
Hi There, I'm a recent addition to SEOmoz and over the past few weeks I've been trying to figure things out. This whole SEO process has been a bit of a brain burner, but it's slowly becoming a little clearer. For a while I noticed that I was unable to get Open Site Explorer to display information about my site. It mentioned that there was not enough data for the URL. Too recent of a site, no links, etc. Eventually I changed the URL to include "www." and it pulled up results. I also noticed that a few of my page warnings are because of duplicate page content. One page will be listed as http://enbphotos.com. The other will be listed as http://www.enbphotos.com. I guess I'm not sure what this all means and how to change it. I'm also not really sure what the terminology even is; something regarding the root domain seemed appropriate, but I'm not sure if that is accurate. Any help/suggestions/links would be appreciated! Thanks, Chris
Technical SEO | enbphotos
-
Wordpress question
I was curious: when I run an OSE report on certain websites, their name.wordpress.com shows up with a PA of whatever and a DA of 100. But when I created my WordPress site and posted on it, it only has a PA and DA of 1. Is this because SEOmoz has not indexed it yet? It is a month old. http://shiftinsurance.wordpress.com/ Can anyone help, please?
Technical SEO | greasy
-
Sub-domains for keyword targeting? (specific example question)
Hey everyone, I have a question I believe is interesting and may help others as well. Our competitor heavily (over 100-200) uses sub-domains to rank in the search engines... and is doing quite well. What's strange, however, is that all of these sub-domains are just archives -- they're 100% duplicate content! An example can be seen here where they just have a bunch of relevant posts archived with excerpts. How is this ranking so well? Many of them are top 5 for keywords in the 100k+ range. In fact their #1 source of traffic is SEO for many of the pages. As an added question: is this effective if you were to actually have a quality/non-duplicate page? Thanks! Loving this community.
Technical SEO | naturalsociety
-
Video question
If another company hosts our videos, but they are only found embedded on our site, do we get all of the SEO benefits from the video, or would we have to host it for that to happen?
Technical SEO | ClaytonKendall
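On the video-hosting question: one common way for the embedding page to describe a third-party-hosted video to search engines is schema.org VideoObject markup placed on the page that embeds it. A hedged sketch with hypothetical URLs:

```json
{
  "@context": "https://schema.org",
  "@type": "VideoObject",
  "name": "Example product video",
  "description": "Hypothetical description of the embedded video.",
  "thumbnailUrl": "https://www.example.com/thumb.jpg",
  "uploadDate": "2012-01-01",
  "contentUrl": "https://videohost.example.com/video.mp4",
  "embedUrl": "https://videohost.example.com/embed/123"
}
```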