Why is OSE showing no data for this URL?
-
Hi all,
Does anyone have any ideas as to why OSE might not have any data for this URL:
http://www.ccisolutions.com/StoreFront/product/shure-slx24-sm58-wireless-microphone-system-j3
It is not a new page at all. It's been on the site for years.
Is OSE being quirky? Or is there an underlying problem with this page?
Thanks in advance for any light you can shed on this,
Dana
-
Hi Paul,
We discovered that the problem was caused by a trailing comma at the end of the keyword string we once used to populate the meta keywords tag. Although we stopped using those fields, the keyword information in them is still being parsed, and the parser did not know what to do when it encountered a comma followed by nothing.
We did run a query and found that this problem was affecting 128 of our product pages and had been for a long time. We haven't been populating the keywords for almost a year now, so the problem is at least that old.
The commas are now gone.
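In case it helps anyone else who hits this: the parser-side fix amounts to discarding empty tokens after splitting on commas. A rough sketch in Python (parse_keywords is a hypothetical stand-in, not our actual parser):

```python
def parse_keywords(raw: str) -> list[str]:
    """Split a meta-keywords string, discarding the empty entries
    produced by a trailing or doubled comma."""
    return [kw.strip() for kw in raw.split(",") if kw.strip()]

print(parse_keywords("wireless microphone, shure, slx24,"))
# → ['wireless microphone', 'shure', 'slx24']
```

The trailing comma simply yields an empty token, which the filter drops instead of choking on.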
Thanks again to you and Andrew!
-
Glad I could help, Dana.
And yes, "borked" is a technical term. It's defined as: existing in a badly broken state as a result of an inexperienced/inattentive user making unauthorised/incorrect changes to a website's code or content.
Can also be used as a verb: "he borked the database so badly the whole site went 503".
Not that it's ever been applied to me or anything.
And yeah - sometimes our tools can mislead us, even though the info they provided was "technically" correct.
Suggestion for a fast way to test the rest of the site for this kind of error: Use the paid version of Screaming Frog to program a search for a snippet of code that should be in the content area of every product page. Limit the crawl to the product pages category. (Or whatever sections of the site you're worried about.)
You could search for something as simple as class="productExtendedDescription", which would at least ensure the content container was there. That still wouldn't prove there was any content in it, but if you wanted to get fancy with regex, you could check for that too. You could also search for the closing </html> tag, which would indicate that the rest of the page's code likely exists.
Just an idea to speed up the testing process.
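If you'd rather script it than run a paid crawl, the same spot check is only a few lines of Python. A rough sketch (the marker string echoes the class name mentioned above; has_marker and fetch are just illustrative helpers):

```python
import urllib.request

# Snippet expected in the content area of every product page
# (using the class name discussed above as an example).
MARKER = 'class="productExtendedDescription"'

def has_marker(html: str, marker: str = MARKER) -> bool:
    """Report whether the expected content snippet appears in the page source."""
    return marker in html

def fetch(url: str) -> str:
    """Download a page's raw HTML."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

# Usage (uncomment and supply your own URL list to run against live pages):
# for url in product_urls:
#     print("OK" if has_marker(fetch(url)) else "MISSING CONTENT", url)
```

Anything flagged MISSING CONTENT would be a candidate for the same empty-body problem.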
Paul
-
Thanks so much Paul,
Yes, when I ran a "Fetch as Googlebot" it returned a "Success" message, but when I looked at what Google is seeing, there was no content on the page.
"borked" - great term...I am definitely going to have to file that one away for future use!
If the problem is isolated to this page, that's one thing. I am more concerned that this problem is affecting a larger number of pages.
Once I figure it out, I'll come back here and post what we found/fixed.
I really appreciate the comments from you and Andrew very much!
-
Dana, there's no content on that page.
The massive head section with all its JavaScript is there, making it look like there's lots of code, but the actual body content has somehow been deleted.
This is all I see in the actual body of the page:

<form name="headerForm" action="IAFDispatcher" onsubmit="return submitQuery()" method="post">
</form>

That's it. There's no actual content, no footer, no closing </body> or </html> tag, which makes me think someone's actually deleted the content part of the code by accident.
Good luck figuring out who borked it.
Paul
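A quick way to script this check across other pages: strip the markup and measure how much visible body text is left. A rough sketch using Python's built-in html.parser (the BodyText class is just an illustration, not anyone's production code):

```python
from html.parser import HTMLParser

class BodyText(HTMLParser):
    """Collect visible text inside <body>, ignoring script and style blocks."""
    def __init__(self):
        super().__init__()
        self.in_body = False
        self.skip = 0        # depth of open <script>/<style> tags
        self.chunks = []
    def handle_starttag(self, tag, attrs):
        if tag == "body":
            self.in_body = True
        elif tag in ("script", "style"):
            self.skip += 1
    def handle_endtag(self, tag):
        if tag == "body":
            self.in_body = False
        elif tag in ("script", "style") and self.skip:
            self.skip -= 1
    def handle_data(self, data):
        if self.in_body and not self.skip:
            self.chunks.append(data.strip())

def body_text(html: str) -> str:
    """Return the visible body text of a page; empty string means an empty body."""
    parser = BodyText()
    parser.feed(html)
    return " ".join(c for c in parser.chunks if c)
```

A page like the one above, with nothing in the body but an empty form, would come back as an empty string.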
-
I just ran the source code for this page through the validator at: http://validator.w3.org/
There are a multitude of problems that need to be addressed. Thanks very much Andrew. I do have enough HTML knowledge to provide guidance to our IT manager on how to fix the problems. I don't have access to much of the source code, so it will certainly be a "project" to fix the issues.
I am sure these problems exist all over the site, as many people with very little coding and design experience have had their hands in the pot (so to speak) over the years.
At least this will allow me to prove to our CEO that our underlying code is indeed presenting a problem for indexing and crawling.
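In case it's useful to anyone checking lots of pages: the validator can also be driven from a script. A rough sketch in Python against the W3C Nu checker's JSON endpoint (the endpoint and response shape are my reading of its service docs, and validate_html/count_errors are hypothetical helper names):

```python
import json
import urllib.request

NU_VALIDATOR = "https://validator.w3.org/nu/?out=json"

def validate_html(html: str) -> list:
    """POST raw HTML to the W3C Nu validator and return its messages list."""
    req = urllib.request.Request(
        NU_VALIDATOR,
        data=html.encode("utf-8"),
        headers={
            "Content-Type": "text/html; charset=utf-8",
            "User-Agent": "html-check/0.1",  # some services reject Python's default agent
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))["messages"]

def count_errors(messages: list) -> int:
    """Count hard errors only, ignoring warnings and info notices."""
    return sum(1 for m in messages if m.get("type") == "error")
```

An error count per URL would make a concrete before/after metric to show the CEO.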
-
I did some comparisons with other pages and it doesn't seem that the drop-down frequency selector is the culprit. This page also has one: www.ccisolutions.com/StoreFront/product/shure-slx24-sm58-wireless-microphone-system-h5
but the cache in Google seems to be fine for this page and OSE displays data for it just fine.
-
Could the coding issue be related to the drop-down box that's located just above the pricing on the right-hand side? That is one thing that makes this product page different from others on our site.
Thoughts?
-
I also see what you mean that there is a problem with Google's cache. The cache date is really old (April 11) and there is no preview of the page.
Anyone who can point me in the right direction?
-
Thanks so much for responding, Andrew. I have suspected problems with our code for a long time, but I am not a coder, so it's been a challenge to attempt to identify the specific problem.
I believe this is not just a problem with this page, but could be a problem across many pages on our site.
Can you or any of my fellow Mozzers point to what you are seeing in the source code that leads you to believe it is corrupted?
Many thanks for any help. I truly appreciate it!
Dana
-
Hi Dana,
I think your page is corrupted. I have copied the source code I am seeing to a pastebin: http://pastebin.com/BRfFT4RR
It looks like Google Cache is also having problems with this page. Perhaps OSE had trouble too and so skipped the page?
- Andrew