DotNetNuke generating long URLs showing up as crawl errors!
-
Since early July, a DotNetNuke site has been generating long URLs that show up in campaigns as crawl errors: long URL, duplicate content, and duplicate page title.
URL: http://www.wakefieldpetvet.com/Home/tabid/223/ctl/SendPassword/Default.aspx?returnurl=%2F
Is this a problem with DNN or a nuance to be ignored? Can it be controlled?
Google Webmaster Tools shows no crawl errors like this.
-
If at all possible, use the IIS URL Rewrite module so that you can canonicalize your site structure:
http://www.iis.net/downloads/microsoft/url-rewrite
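As a minimal sketch of what that canonicalization could look like (the host name here is taken from the URL in the question; the rule name and exact patterns are illustrative, not a definitive configuration), a redirect rule in the `<system.webServer>` section of web.config might be:

```xml
<rewrite>
  <rules>
    <!-- 301-redirect the bare domain to the canonical www host -->
    <rule name="CanonicalHost" stopProcessing="true">
      <match url="(.*)" />
      <conditions>
        <add input="{HTTP_HOST}" pattern="^wakefieldpetvet\.com$" />
      </conditions>
      <action type="Redirect" url="http://www.wakefieldpetvet.com/{R:1}"
              redirectType="Permanent" />
    </rule>
  </rules>
</rewrite>
```

Rules like this consolidate duplicate hostnames, but they won't by themselves suppress DNN's long module URLs (the `/ctl/SendPassword` style paths); those usually need canonical tags or crawl directives as well.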
If you have a smaller site and it's still in the early stages, I'd try other CMSs and software to see if you can find something you like. I love WordPress, and I use it for everything.
-
Are all of the problematic URLs variations of the SendPassword page? If you are getting long URLs for other pages as well, make sure you are using rel="canonical" tags to tell the search engines the correct URL to use for each page.
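For example (a sketch only; the page path is hypothetical), each variant URL would carry a tag like this in its `<head>`, pointing at the one version you want indexed:

```html
<link rel="canonical" href="http://www.wakefieldpetvet.com/Home.aspx" />
```

With that in place, query-string variants such as `?returnurl=%2F` are consolidated onto the canonical URL rather than being reported as duplicates.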