Why is OSE showing no data for this URL?
-
Hi all,
Does anyone have any ideas as to why OSE might not have any data for this URL:
http://www.ccisolutions.com/StoreFront/product/shure-slx24-sm58-wireless-microphone-system-j3
It is not a new page at all. It's been on the site for years.
Is OSE being quirky? Or is there an underlying problem with this page?
Thanks in advance for any light you can shed on this,
Dana
-
Hi Paul,
We discovered that the problem was caused by a trailing comma at the end of the keyword string we once used to populate the meta keywords tag. Even though we no longer populate those fields, the keyword information in them is still being parsed, and the parser did not know what to do when it encountered a comma followed by nothing.
We did run a query and found that this problem was affecting 128 of our product pages and had been for a long time. We haven't been populating the keywords for almost a year now, so the problem is at least that old.
The commas are now gone.
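For anyone who hits something similar, here's roughly what was going on (the keyword values below are made up): a parser that naively splits the string on commas ends up with an empty entry at the end, and that empty entry is what trips it up:
keywords = "wireless microphone, shure slx24, sm58,"   # made-up string with the kind of trailing comma we had
print(keywords.split(","))
# ['wireless microphone', ' shure slx24', ' sm58', '']  <- note the empty entry at the end
cleaned = [k.strip() for k in keywords.split(",") if k.strip()]   # removing the trailing comma (or empty entries) avoids it
print(cleaned)
# ['wireless microphone', 'shure slx24', 'sm58']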
Thanks again to you and Andrew!
-
Glad I could help, Dana.
And yes, "borked" is a technical term. It's defined as existing in a badly broken state as a result of an inexperienced/inattentive user making unauthorised/incorrect changes to a website's code or content
Can also be used as a verb: "he borked the database so badly the whole site went 503".
Not that it's ever been applied to me or anything.
And yeah - sometimes our tools can mislead us, even though the info they provided was "technically" correct.
Suggestion for a fast way to test the rest of the site for this kind of error: use the paid version of Screaming Frog to set up a custom search for a snippet of code that should appear in the content area of every product page. Limit the crawl to the product-page section (or whatever parts of the site you're worried about).
You could search for something as simple as class="productExtendedDescription", which would at least confirm the content container was there. That still wouldn't prove there was any content in it, but if you wanted to get fancy with regex, you could do that too. You could also search for the closing </body> tag, which would indicate that the rest of each page's code likely exists.
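If you'd rather script a quick spot-check instead of (or alongside) the Screaming Frog crawl, a rough sketch along these lines would work — this assumes Python with the requests library, and the URL list is just a sample you'd swap for your full product-page list:
import requests

# Sample product URLs to spot-check; in practice, pull the full list
# from your sitemap or a crawl export.
urls = [
    "http://www.ccisolutions.com/StoreFront/product/shure-slx24-sm58-wireless-microphone-system-j3",
    "http://www.ccisolutions.com/StoreFront/product/shure-slx24-sm58-wireless-microphone-system-h5",
]

# The snippet that should appear on every intact product page.
marker = 'class="productExtendedDescription"'

for url in urls:
    html = requests.get(url, timeout=30).text
    status = "OK" if marker in html else "MISSING CONTENT CONTAINER"
    print(status + "\t" + url)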
Just an idea to speed up the testing process.
Paul
-
Thanks so much Paul,
Yes, when I ran a "Fetch as Googlebot" it returned a "Success" message, but when I looked at what Google is actually seeing, there is no content on the page.
"borked" - great term...I am definitely going to have to file that one away for future use!
If the problem is isolated to this page, that's one thing. I am more concerned that this problem is affecting a larger number of pages.
Once I figure it out, I'll come back here and post what we found/fixed.
I really appreciate the comments from you and Andrew very much!
-
Dana, there's no content on that page.
The massive head section with all its JavaScript is there, making it look like there's lots of code, but the actual body content has somehow been deleted.
This is all I see in the actual body of the page:
<form name="headerForm" action="IAFDispatcher" onsubmit="return submitQuery()" method="post">
</form>
That's it. There's no actual content, no footer, and no closing </body> or </html> tag, which makes me think someone accidentally deleted the content part of the code.
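If you want a quick way to see just how empty the body of any given page is, a rough sketch like this would do it (standard-library Python only; it counts visible text characters inside the body, ignoring scripts and styles):
from html.parser import HTMLParser
from urllib.request import Request, urlopen

class BodyTextCounter(HTMLParser):
    # Counts visible text characters inside <body>, skipping <script>/<style>.
    def __init__(self):
        super().__init__()
        self.in_body = False
        self.skip_depth = 0
        self.chars = 0

    def handle_starttag(self, tag, attrs):
        if tag == "body":
            self.in_body = True
        elif tag in ("script", "style"):
            self.skip_depth += 1

    def handle_endtag(self, tag):
        if tag == "body":
            self.in_body = False
        elif tag in ("script", "style") and self.skip_depth:
            self.skip_depth -= 1

    def handle_data(self, data):
        if self.in_body and not self.skip_depth:
            self.chars += len(data.strip())

url = "http://www.ccisolutions.com/StoreFront/product/shure-slx24-sm58-wireless-microphone-system-j3"
req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
html = urlopen(req, timeout=30).read().decode("utf-8", errors="replace")

counter = BodyTextCounter()
counter.feed(html)
print(str(counter.chars) + " characters of visible body text")  # near zero = borked page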
Good luck figuring out who borked it.
Paul
-
I just ran the source code for this page through the validator at: http://validator.w3.org/
There are a multitude of problems that need to be addressed. Thanks very much, Andrew. I do have enough HTML knowledge to guide our IT manager on how to fix the problems, but I don't have access to much of the source code, so it will certainly be a "project" to fix these issues.
I am sure these problems exist all over the site, as many people with very little coding and design experience have had their hands in the pot (so to speak) over the years.
At least this will allow me to prove to our CEO that our underlying code is indeed presenting a problem for indexing and crawling.
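If it's useful to anyone else doing this kind of cleanup, here's a rough sketch our IT manager could adapt for batch-checking a set of pages — it assumes Python with the requests library, and it assumes the W3C Nu HTML Checker at validator.w3.org/nu/ still accepts a doc URL with out=json (the page list is just a sample, and real runs should be rate-limited to be polite to their service):
import time
import requests

CHECKER = "https://validator.w3.org/nu/"

# Sample pages; swap in the product pages you actually want to audit.
pages = [
    "http://www.ccisolutions.com/StoreFront/product/shure-slx24-sm58-wireless-microphone-system-j3",
    "http://www.ccisolutions.com/StoreFront/product/shure-slx24-sm58-wireless-microphone-system-h5",
]

for page in pages:
    resp = requests.get(
        CHECKER,
        params={"doc": page, "out": "json"},
        headers={"User-Agent": "markup-audit-sketch/0.1"},
        timeout=60,
    )
    messages = resp.json().get("messages", [])
    errors = sum(1 for m in messages if m.get("type") == "error")
    print(str(errors) + " validation errors\t" + page)
    time.sleep(2)  # don't hammer the shared validator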
-
I did some comparisons with other pages and it doesn't seem that the drop-down frequency selector is the culprit. This page also has one: www.ccisolutions.com/StoreFront/product/shure-slx24-sm58-wireless-microphone-system-h5
but Google's cache looks normal for that page, and OSE displays data for it just fine.
-
Could the coding issue be related to the drop-down box located just above the pricing on the right-hand side? That is one thing that makes this product page different from others on our site.
Thoughts?
-
I also see what you mean about the problem with Google's cache. The cache date is really old (April 11) and there is no preview of the page.
Can anyone point me in the right direction?
-
Thanks so much for responding, Andrew. I have suspected problems with our code for a long time, but I am not a coder, so it's been a challenge to identify the specific problem.
I believe this is not just a problem with this page, but could be a problem across many pages on our site.
Can you or any of my fellow Mozzers point to what you are seeing in the source code that leads you to believe it is corrupted?
Many thanks for any help. I truly appreciate it!
Dana
-
Hi Dana,
I think your page is corrupted. Here is a link to the source code I am seeing: http://pastebin.com/BRfFT4RR
It looks like Google Cache is also having problems with this page. Perhaps OSE had trouble too and so skipped the page?
- Andrew