Dealing with a 404
-
Hi there,
I have an error on one of my campaigns. It says it gets a 404 on this page:
http://www.datasat.com/tetra/white-paper.htmlWEhjdAfgkh
However, I cannot replicate the error because that URL doesn't exist on the site. The end of the URL has some spurious characters, and I don't know how they got there.
Has anyone any ideas about what's happening and how I can sort it?
Many thanks
-
Hi Anders,
Thanks for that.
Iain
-
Hi Iain.
You could just fix the error on the www.datasat.com/mining/tetra-networks-for-mining.html page. If you'd like, you could also do a 301 from http://www.datasat.com/tetra/white-paper.htmlWEhjdAfgkh to http://www.datasat.com/tetra/white-paper.html, but I don't think there's really any need for that.
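If you do decide to add the 301, a minimal sketch for Apache (assuming the site runs on Apache and allows .htaccess overrides) would be:

```apache
# Illustrative .htaccess rule: permanently redirect the malformed URL
# to the real white paper page (requires mod_alias)
Redirect 301 /tetra/white-paper.htmlWEhjdAfgkh /tetra/white-paper.html
```

The paths here are taken from the URLs in this thread; the same one-line redirect can be expressed in whatever server software the site actually runs.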
-
Hi!
I don't think that is the case on this occasion, as there is actually an A-tag (with just "." as anchor text) on the page www.datasat.com/mining/tetra-networks-for-mining.html pointing towards http://www.datasat.com/tetra/white-paper.htmlWEhjdAfgkh (look in the page source at line 63).
Removing that link would really fix the whole issue...
Anders
-
Hi Thomas,
Thanks for your thorough and very useful answer.
I'm sorry, I got a little confused with the Google Images part, but essentially are you saying that I should place a 301 redirect on the page? It seems to be a very small error. Is there any real downside to leaving it as it is?
Thanks again (and sorry for the stupidity of the follow-up!)
Cheers
Iain
-
This is because of GA and A1WebStats (http://www.a1webstats.com/), whose tracking script is loaded from http://www.a1webstats.com/stats/pt.js. See this link: https://builtwith.com/datasat.com
To fix it, use a 301 redirect and rel=canonical HTTP headers:
http://googlewebmastercentral.blogspot.com/2011/06/supporting-relcanonical-http-headers.html
https://support.google.com/webmasters/answer/139066?hl=en
http://moz.com/blog/how-to-advanced-relcanonical-http-headers
http://moz.com/blog/rel-confused-answers-to-your-rel-canonical-questions
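Following the rel=canonical HTTP header approach those links describe, a rough Apache sketch (the PDF file name here is purely illustrative, and mod_headers must be enabled):

```apache
# Illustrative only: send a rel="canonical" Link header for a non-HTML
# resource, as described in the Google and Moz articles above
<Files "white-paper.pdf">
  Header add Link "<http://www.datasat.com/tetra/white-paper.html>; rel=\"canonical\""
</Files>
```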
For an example of this type of data, click on the link below:
http://www.a1webstats.com/stats/view-report.aspx?ReportID=9B7D261C-F465-46DC-8751-D4367AC02E11
What this shows is information that often includes the search phrase typed into Google Images, and it always shows the path that each person took through the website. As you can see from the example link above, a click from a Google Images search can lead to many pages being viewed.
Step 3 – Raising your visibility in Google Images
Being realistic, the majority of people who search for something in Google Images are not going to be potential buyers. They could just be casually interested, or need an image for something they're working on.
What you should be interested in are the small percentage who ARE potentially useful to you.
Take the example of that link http://www.a1webstats.com/stats/view-report.aspx?ReportID=9B7D261C-F465-46DC-8751-D4367AC02E11 – it shows just 7 visitors who came to the website from Google Images and looked at a few pages within a week. There were actually many more Google Images visits, but we've shown just those of most interest/value. If that business had only those 7 visitors in a week, the opportunities to convert them into enquiries/business aren't great. But what if they had 70, 170, 270, more …
Quite simply, the more traffic there is from Google Images, the greater the chance that some of those visitors will be useful to you.
The business in that example creates highly exclusive swimming pool designs that are affordable to only a tiny percentage of the world's population. Of the visitors to their website coming in from Google Images, it would be surprising if even 1% were in the target market. Therefore, raising visibility within Google Images raises the chances of getting useful visitors.
If you use the Referrers report within A1WebStats you’ll see how many visits you get (from Google Images searchers) within a period of time (e.g. a month). If the number is quite small (and you see the value on using images to bring traffic to your website) then you need to identify ways to boost your visibility in Google Images (again, ask us how if you’re not sure).
Probably the simplest modification we can make to the tracking code is to modify the `_trackPageview` call so that it records a pageview for a page we specify, rather than its default behavior of reading the URL from the address bar.
Common reasons for making this change include:
- turning what would normally be events (like button clicks) into goals, since only pageviews or virtual pageviews can be set up as goals
- including extra information in the pageview, such as information contained in an anchor (e.g. index.html#anchor)
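As a sketch of that override using classic ga.js syntax (the account ID and virtual path below are placeholders, not values from this thread):

```javascript
// Classic (ga.js) Google Analytics queues commands on the _gaq array.
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXXXX-1']); // placeholder property ID

// Default behavior: _trackPageview with no argument records the URL
// read from the address bar:
//   _gaq.push(['_trackPageview']);

// Passing a path overrides that default, e.g. to record a button click
// as a "virtual pageview" that can then be set up as a goal:
function trackVirtualPageview(path) {
  _gaq.push(['_trackPageview', path]);
}

trackVirtualPageview('/virtual/button-click');
```

With Universal Analytics (analytics.js) the equivalent would be `ga('send', 'pageview', '/virtual/button-click')`.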
On your page you're using:
Google Analytics
Google Analytics Usage Statistics - Websites using Google Analytics
Google Analytics offers a host of compelling features and benefits for everyone from senior executives and advertising and marketing professionals to site owners and content developers.
Net-Results
Net-Results Usage Statistics - Websites using Net-Results
Marketing automation software.
Hope this helps,
Thomas
-
Hi!
OpenSiteExplorer shows an inbound link from www.datasat.com/mining/tetra-networks-for-mining.html. I looked at the page source, and it contains an empty link towards this particular page right after the correct link to your whitepaper.
Best regards,
Anders
-
The letters after the URL's .html are a tracking code used for tracking shared URLs.
hope that helps,
Thomas