Metadata and duplicate content issues
-
Hi there: I'm seeing a steady decline in organic traffic, but at the same time an increase in pageviews and direct traffic. My site has about 3,000 crawl errors! The errors are duplicate content, missing description tags, and descriptions that are too long. Most of these issues are related to events that are being imported from Google Calendars via iCal and the pages created from these events. Should we block calendar events from being crawled by using the Disallow directive in the robots.txt file? Here's the site: https://www.landmarkschool.org/
-
Yes, of course you can keep running the calendar.
But keep in mind that some pages will still appear in search results even after you have deleted those URLs.
You can watch this video, in which Matt Cutts explains why a page that is disallowed in robots.txt may still appear in Google's search results. In that case, just to make sure, you can implement a 301 redirect.
This is going to be your second line of defense. Just redirect all of those URLs to your home page.
There are many options for making a redirect. In my case, I'm a WordPress user, so with a simple plugin I can solve the problem in five minutes. In your case, I have been checking your website and I have no idea which CMS you are using.
Anyway, you can use the 301 Redirect Code Generator app, which has many options available: PHP, JS, ASP, ASP.NET, and of course Apache (.htaccess). Now is the right moment to use the list that I mentioned in my first answer (step 2: create a list of all the URLs that you want to disable).
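As a rough sketch of the Apache route (only an illustration, assuming your site runs on Apache with mod_alias; the /events/ prefix is hypothetical, so swap in the real path pattern from your list), a .htaccess rule like this would 301-redirect those URLs to the home page:

# A minimal sketch, assuming Apache with mod_alias enabled.
# /events/ is a hypothetical prefix; replace it with the real prefix of the deleted event URLs.
RedirectMatch 301 ^/events/ https://www.landmarkschool.org/

This sends a permanent-redirect status to both visitors and crawlers, so the old event URLs eventually drop out of the index in favor of the home page.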
So let's talk about your second question.
Of course it will hurt your ranking. If you have 3,020 pages indexed on Google but just 20 of those pages are useful for users, you have a big problem. A website should address any question or concern that a current or potential customer or client may have. If it doesn't, the website is essentially useless.
Do the simple division: 20 / 3020 ≈ 0.0066, so less than 1% of your site is useful. I'm pretty sure your ranking has been affected.
Don't forget to mark my answer as a "GOOD ANSWER"; that will make me happy. Good luck.
-
Hi Roman: Thanks so much for your prompt reply. I agree that using robots.txt is the way to go. I do not want to disable the Google Calendar sync (we're a school and need our events to feed from several Google Calendars). I want to confirm that the robots.txt option will still work if the calendars are still syncing with the site.
One more question: do you think that all these errors are causing the dip in organic traffic?
-
SOLUTION
1 - Disable the Google Calendar sync with your website.
2 - Create a list of all the URLs that you want to disable.
3 - At this point you have multiple options to block the URLs that you want to exclude from search engines.
So first, let's define the problem:
By blocking a URL on your site, you can stop Google from indexing that web page for display in Google Search results. In other words, people looking through Google Search results can't see or navigate to a blocked URL or its content.
If you have pages or other content that you don't want to appear in Google Search results, you can block them using a number of options:
- robots.txt files (Best Option)
- meta tags
- password-protection of web server files
In your case, option 2 will take a lot of time. Why? Because you would have to manually add the "noindex" meta tag to each page, one by one, which makes no sense. Option 3 requires some server configuration and, for me, is a little complex and time-consuming; at least in my case I would have to do research on Google, watch some videos on YouTube, and see what happens.
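For reference, this is all the "noindex" meta tag is: a minimal sketch of the one line you would have to paste into the <head> of every single event page, which is exactly why it doesn't scale here:

<!-- keeps this one page out of search results; must be repeated on every page you want blocked -->
<meta name="robots" content="noindex">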
So the first option is the winner for me. Let's see an example of how your robots.txt file should look.
The following example robots.txt file specifies that no robots should visit any URL starting with /events/january/ or /tmp/, or the page /calendar.html:
<------------------------------START HERE------------------------------>
# robots.txt for https://www.landmarkschool.org/
User-agent: *
Disallow: /events/january/ # This is an infinite virtual URL space
Disallow: /tmp/ # these will soon disappear
Disallow: /calendar.html
<------------------------------END HERE------------------------------>
FOR MORE INFO SEE THE VIDEO > https://www.youtube.com/watch?v=40hlRN0paks