Removing duplicate &var=1 etc. parameter URLs from Google
-
Hi, I had a huge drop in traffic around the 11th of July, over 50% down with no recovery as yet: from ~5,000 organic visits per day to barely over 2,500.
I fixed a problem one script was introducing that had caused high bounce rates.
I have now identified that Google has indexed the entire news section four times: the same content, but with var=0, var=1, var=2, var=3 appended, around 40,000 URLs in total.
This would have to be causing problems.
I have fixed the problem and those URLs 404 now; no need for 301s, as they are not linked to from anywhere.
How can I get them out of the index? I can't do it one by one with the URL removal request, and I can't remove a directory with the URL removal tool as the regular content is still there.
If I ban those URLs in robots.txt, won't Google never try to crawl them again, and thus never discover they are 404ing?
These URLs are no longer linked to from anywhere, so how can Google ever reach them by crawling to find them 404ing?
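In case it helps, here is roughly the spot-check I am running to confirm the fix; the domain and paths below are placeholders rather than my real URLs:

```python
import requests

# Hypothetical spot-check that the parameterised duplicates now return 404.
# Domain and paths are placeholders -- substitute your own news URLs.
BASE = "http://www.example.com"
SAMPLE_PATHS = ["/news/some-article", "/news/another-article"]

for path in SAMPLE_PATHS:
    for var in range(4):  # duplicates were indexed as var=0 through var=3
        url = f"{BASE}{path}?var={var}"
        status = requests.head(url, allow_redirects=False, timeout=10).status_code
        print(status, url)  # expect 404 for every parameterised version
```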
-
Yes.
-
Hi, thanks. So if it can't find a page and finds no more links to it, does that mean the page should drop out of the index within a month?
-
The definition of a 404 page is a page which cannot be found. So in that sense, no, Google can't find the page.
Google's crawlers follow links. If there is no link to the page, then there is no issue. If Google locates a link, it will attempt to follow that link.
-
Hi, thanks. So if a page is 404ing but not linked to from anywhere, will Google still find it?
-
Hi Adam.
The preferred method to handle this issue would have been to offer only one version of the URL. Once you realized the other versions were active, you had a couple of options to deal with the problem (see the sketch after these options):
Use a 301 to redirect all versions of the page to the main URL. This would have allowed your existing Google listings to keep working: users would still find the correct page, and Google would notice the 301 and adjust its index.
Another option to consider, IF the pages were helpful, would be to keep them and use the canonical tag to indicate the URL of the primary page. This method would offer the same advantages mentioned above.
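To make the 301 option concrete, here is a minimal sketch using Flask; the route, parameter name, and domain are illustrative assumptions, not your actual setup:

```python
from flask import Flask, redirect, request

app = Flask(__name__)

@app.route("/news/<slug>")
def news_article(slug):
    # Hypothetical route -- adjust the path and parameter to your URL scheme.
    if "var" in request.args:
        # Permanently redirect any parameterised duplicate (?var=0, ?var=1, ...)
        # to the clean URL so existing search listings keep resolving.
        return redirect(f"/news/{slug}", code=301)
    # Otherwise render the article as normal. For the canonical-tag option you
    # would instead keep all versions live and emit
    # <link rel="canonical" href="https://www.example.com/news/..."/> in the <head>.
    return f"Article: {slug}"
```

The same pattern works in any framework, or as a rewrite rule in server config; the important part is the permanent 301 status, not the tooling.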
By removing the pages and allowing them to 404, everyone loses for the next month. Users who click on a search result will be taken to a 404 page rather than finding the content they seek, and Google won't be offering the results users are looking for. You will experience a high bounce rate, as many users do not like 404 pages, and it will take about a month for an average site to be fully crawled and the issue corrected.
If you block the pages in robots.txt, then Google won't attempt to crawl those URLs at all, which means it will never see the 404s and the stale pages can linger in the index. In general, robots.txt should not be used in this manner.
My recommendation is to fix this issue with proper 301s. If that is not an option, make sure your 404 page is as helpful and user-friendly as possible: include a site search option along with your main navigation (a sketch of such a handler follows below). Google will crawl a small percentage of your site each day, so you will see the number of 404 links diminish over time.
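If you do stay with 404s, a friendlier handler might look like the sketch below; the search form and links are illustrative assumptions, and note it still returns a genuine 404 status so Google does not treat the page as a "soft 404":

```python
from flask import Flask

app = Flask(__name__)

@app.errorhandler(404)
def helpful_not_found(error):
    # Friendly page for lost visitors, but keep the real 404 status code --
    # returning 200 here would create "soft 404s" that linger in the index.
    body = """
        <h1>Sorry, that page is gone.</h1>
        <form action="/search"><input name="q" placeholder="Search this site"></form>
        <p><a href="/">Home</a> | <a href="/news">News</a></p>
    """
    return body, 404
```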