SEOmoz Crawl CSV in Excel: already split by semicolon. Is this Excel's fault or SEOmoz's?
-
If, for example, a page title contains an ë, the .csv created by the SEOmoz Crawl Test is already split into columns at that point, even though I haven't used Excel's text-to-columns feature yet.
When I try to do the latter, Excel warns me that I'm overwriting non-empty cells, which of course is something I would rather not do since that would make me lose valuable data.
My question is: is this something caused by opening the .csv in Excel, or earlier in the process when this .csv is created?
-
Thanks a lot for this question and answer. I've been cursing at this issue for months and months.
The solution suggested by Barry is great! Download Open Office, even if it's just to convert CSV files into columns correctly.
-
In hindsight, the solution offered on that page does not do the trick. I guess I will keep using my workaround...
-
Thanks for the quick response. Open Office is something I should have checked first and indeed, opening the .csv in Open Office doesn't cause semicolons to split columns.
A workaround I've just tried in Excel is creating an empty spreadsheet and importing the .csv as external data, which lets me deselect the semicolon as a separator.
I've also set the separator to something else in the Windows Language Settings, thanks for that link.
-
Think it's more likely a PC 'fault'.
You probably want to list a comma rather than a semi-colon as a separator - http://office.microsoft.com/en-us/excel-help/import-or-export-text-txt-or-csv-files-HP010099725.aspx#BMchange_the_separator_in_all_.csv_text
If you use open office you can choose what you want to use when you first open each file.
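For anyone scripting around this instead: parsing the file with an explicit delimiter sidesteps Excel's locale-dependent list separator entirely. A minimal Python sketch (the sample row is made up):

```python
import csv
import io

# A stand-in for one row of a crawl export; the title contains both
# a non-ASCII character and a semicolon (hypothetical data).
data = 'URL,Title\nhttp://example.com/,"Café; menu"\n'

# Parse with an explicit comma delimiter, so a locale-specific list
# separator (e.g. ';') never splits fields unexpectedly.
rows = list(csv.reader(io.StringIO(data), delimiter=','))
print(rows[1])  # the semicolon stays inside the quoted title field
```

Excel's import wizard, as described above, amounts to the same thing: you state the delimiter yourself instead of letting the Windows regional settings decide.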
Related Questions
-
Is SEOmoz spider ignoring my redirect?
My client was previously serving their website from both a .co.uk and a .com domain. The DNS for each of these domains was pointing to the same place, rather than redirecting. I saw this as a potential duplicate content problem, so I set the .co.uk to 301 redirect to the .com. As a user, the 301 seems to be working correctly. However, now that I have done this, SEOmoz is picking up thousands of "inbound" links from the .co.uk domain. Essentially, every single internal link on the site is being duplicated in my stats as an inbound link as well. It appears that the spider is ignoring the redirect. I'm not sure if it's a legitimate issue that will upset Google too, or if it's just a bug with SEOmoz's spider.
Moz Pro | MadisonSolutions
-
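One quick way to verify what a crawler actually sees for the old domain is to request it without following redirects. A Python sketch, using a throwaway local server in place of the real .co.uk domain (all URLs are placeholders):

```python
import http.server
import threading
import urllib.error
import urllib.request

# Tiny local server standing in for the old domain: it answers every
# request with a 301 pointing at the .com site (hypothetical URLs).
class RedirectEverything(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(301)
        self.send_header("Location", "http://example.com/")
        self.end_headers()

    def log_message(self, *args):  # keep the output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), RedirectEverything)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch without following redirects, so the 301 itself is visible.
class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # stop urllib from chasing the redirect

opener = urllib.request.build_opener(NoRedirect)
try:
    opener.open("http://127.0.0.1:%d/" % server.server_port)
except urllib.error.HTTPError as e:
    # With redirects disabled, the 301 surfaces as an HTTPError.
    status, location = e.code, e.headers["Location"]

print(status, location)
server.shutdown()
```

If the old domain really returns a 301 with the right Location header this way, the redirect itself is fine and the stale links in the report should age out as the index recrawls.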
Why does SEOMoz only crawl 1 page of my site?
My site is: www.thetravelingdutchman.com. It has quite a few pages, but for some reason SEOMoz only crawls one. Please advise. Thanks, Jasper
Moz Pro | Japking
-
Crawl Diagnostics - unexpected results
I received my first Crawl Diagnostics report last night on my dynamic ecommerce site. It showed errors on generated URLs which simply are not produced anywhere on my live site, only on my local development server. It appears that the crawler doesn't think it's running on the live site. For example, http://www.nordichouse.co.uk/candlestick-centrepiece-p-1140.html will go to a Product Not Found page, and therefore Duplicate Content errors are produced, while http://www.nhlocal.co.uk/candlestick-centrepiece-p-1140.html produces the correct product page and not a Product Not Found page. Any thoughts?
Moz Pro | nordichouse
-
SEOMOZ Crawling Our Site
Hi there, We get a report from SEOmoz every week which shows our performance within search. I noticed for our website www.unifor.com.au that it looks through over 10,000 pages; however, our website sells fewer than 500 products, so I'm not sure why or how so many pages are crawled. If someone could let me know that would be great. It uses up a lot of bandwidth doing each of these searches, so if the number of pages being crawled were reduced it would definitely help. Thanks, Geoff
Moz Pro | BeerCartel75
-
Moz crawling
Hi Everyone! I'm new to SEOmoz and wanted to find out if there is a way to decrease the waiting time for the campaign crawl. I have made a lot of changes based on the first crawl and would like to see how these are reflected in the reports, but can't until the next crawl is performed. Any help would be greatly appreciated.
Moz Pro | coremediadesign
-
How to remove Duplicate content due to url parameters from SEOMoz Crawl Diagnostics
Hello all, I'm currently getting back over 8,000 crawl errors for duplicate content pages. It's a Joomla site with VirtueMart, and 95% of the errors are for parameters in the URL that the customer can use to filter products. Google is handling them fine under the Webmaster Tools parameter settings, but it's pretty hard to find the other duplicate content issues in SEOmoz with all of these in the way. All of the problem parameters start with ?product_type_ Should I try to use robots.txt to stop them from being crawled, and if so, what would be the best way to include them in robots.txt? Any help greatly appreciated.
Moz Pro | dfeg
-
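If keeping those filter URLs out of the crawl entirely is acceptable, a wildcard pattern in robots.txt is one option. A sketch, assuming the problem parameters really do all start with ?product_type_ (Rogerbot and Googlebot honor the * wildcard, but not every crawler does):

```
User-agent: *
Disallow: /*?product_type_
```

Note this stops the URLs being crawled at all rather than consolidating them, so any links pointing at the filtered URLs no longer pass value through to the canonical pages.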
Why won't scheduled crawl of my site begin?
I currently have a campaign running on SEOmoz for over a month. It has been showing that a crawl was scheduled to start on 12/21. Now it's 12/23, there has not been a new crawl, and it still says scheduled for 12/21. Anyone know why this is happening or how to fix it? Thanks
Moz Pro | Prime85
-
What causes Crawl Diagnostics Processing Errors in seomoz campaign?
I'm getting the following error when SEOmoz tries to spider my site: First Crawl in Progress! Processing Issues for 671 pages Started: Apr. 23rd, 2011
Here is the robots.txt data from the site:
# Disallow ALL BOTS for image directories and JPEG files.
User-agent: *
Disallow: /stats/
Disallow: /images/
Disallow: /newspictures/
Disallow: /pdfs/
Disallow: /propbig/
Disallow: /propsmall/
Disallow: /*.jpg$
Any ideas on how to get around this would be appreciated 🙂
Moz Pro | cmaddison
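The robots.txt above looks syntactically fine, so the processing issues are more likely on the crawler's side, but the directory rules can at least be sanity-checked with Python's standard-library parser. One caveat: urllib.robotparser does plain prefix matching only — the * and $ wildcard lines are a Googlebot extension it ignores — so only the directory rules are exercised in this sketch (the URLs are placeholders):

```python
import urllib.robotparser

# Feed the directory rules from the robots.txt above into the
# stdlib parser and check a couple of representative URLs.
rp = urllib.robotparser.RobotFileParser()
rp.parse("""
User-agent: *
Disallow: /stats/
Disallow: /images/
Disallow: /newspictures/
Disallow: /pdfs/
""".splitlines())

print(rp.can_fetch("rogerbot", "http://example.com/stats/report.html"))  # False
print(rp.can_fetch("rogerbot", "http://example.com/index.html"))         # True
```

If the blocked directories behave as expected, the 671 "processing issues" are unlikely to be a robots.txt problem, and support would be the next stop.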