Are the CSV downloads malformed when a comma appears in a URL?
-
Howdy folks, we've been a PRO member for about 24 hours now and I have to say we're loving it! One problem I am having, however, is with a CSV exported from our crawl diagnostics summary.
The CSV contains all the data fine; however, I am having problems with it when a URL contains a comma. I am making a little tool to work with the CSVs we download, and I can't parse them properly because URLs sometimes contain commas and aren't quoted the way other fields, such as meta_description_tag, are.
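To show what I mean, here's a tiny sketch of what my tool runs into. The column names and values are made up and I'm using Python's built-in csv module; the real export has more columns, but the shape of the problem is the same:

import csv
import io

# Made-up row in the same shape as the export: the description is quoted,
# but the final URL field is not, so the comma inside it splits the URL.
sample = 'Some Page Title,"A description, with a comma",http://example.com/page?a=1,b=2\n'

row = next(csv.reader(io.StringIO(sample)))
print(len(row))   # 4 columns instead of the expected 3
print(row[-2:])   # ['http://example.com/page?a=1', 'b=2'] - the URL got split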
Is there something simple I'm missing, or is it something that can be fixed?
Looking forward to learning more about the various tools. Thanks for the help.
-
I won't be too hard on the programmers - I'm a programmer myself. Our small business has developers and designers doing the bulk of the SEO. I can see you've looked into it as I have - there are many factors involved if I were to try to "fix" this myself. To be honest, I don't fancy it - I'm hoping the better approach will come from the wonderful SEO Moz developers, who might put in a fix. Hint hint.
-
The first rule in this business is "You can't trust programmers."
I should know - I am a programmer, and I used to manage teams of them.
You can't trust them to write something perfect, because they will always make huge assumptions based on what they know.
They should know that URLs can contain commas, and they should quote them.
If they didn't do that in the final field, it is a deficiency in the code, and your stuff isn't going to work unless you fix it manually.
What you need to do to fix this is to add a quote after the 10th comma and also add one at the end of each line.
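As a rough sketch of that manual fix in Python (assuming an 11-column export with the URL last, and that the first ten commas on each line really are field separators - both of those are guesses you'd check against your own file):

def quote_last_field(line, field_count=11):
    # Split on the first ten commas only; everything after them is treated as the URL.
    parts = line.rstrip("\r\n").split(",", field_count - 1)
    # Wrap the URL in quotes (doubling any quotes inside it) so a CSV parser sees one field.
    parts[-1] = '"' + parts[-1].replace('"', '""') + '"'
    return ",".join(parts) + "\n"

# Example (filename made up): fixed = [quote_last_field(l) for l in open("crawl_export.csv")]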
Unfortunately, even that is a problem.
The problem is that there are other fields that may not be quoted, some of which can start with http://.
There can also be line breaks in the title field, and possibly even in the link text field.
Quotes are escaped by doubling them, so a title containing She said "hi" ends up in the file as "She said ""hi""".
Titles and link text can also contain commas, so it gets very complex.
Some of the fields are a bigger mess because it depends on the link text: if the link text contains an image, you'll have quotes, equals signs, commas, and all kinds of other stuff. You can also have upper-ASCII and multibyte characters.
They did actually quote the first URL field if it contains commas.
They really should have quoted every field.
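If you want a stopgap on your side rather than waiting on a fix, a real CSV parser plus a merge step gets you further than counting commas by hand. This is only a sketch: the column count is an assumption you'd take from the header row, the encoding is a guess, and it assumes any spill-over commas belong to the final URL field.

import csv

EXPECTED_COLUMNS = 11  # assumption - read this off the header row of your export

def repaired_rows(path):
    # csv.reader already copes with quoted fields, doubled quotes, and line breaks
    # inside quoted titles; this only patches rows where the unquoted final URL
    # has spilled into extra columns.
    with open(path, newline="", encoding="utf-8") as f:  # encoding is a guess
        for row in csv.reader(f):
            if len(row) > EXPECTED_COLUMNS:
                row = row[:EXPECTED_COLUMNS - 1] + [",".join(row[EXPECTED_COLUMNS - 1:])]
            yield row

Even then I'd spot-check the output against the raw file: rows where an unquoted title or link text field also contains commas will still come out wrong, and once the quoting is inconsistent there is no repair that's right for every row.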