Grr . . . Just can't seem to get there
-
mrswitch.com.au is one site that we are consistently struggling with. It has a PageRank of 3, which beats most of the competitors, but when it comes to Google AU searches such as "Sydney Electrician" and "Electrician Sydney", we just can't seem to get there and the rankings keep dropping.
We build backlinks and update the pages on a regular basis.
Any ideas? Could it be the custom CMS?
-
Thanks for the help! - Perfect advice!
-
No probs Steve, glad I could help
-
Awesome, yeah that does help. Thanks
-
It's a microformat for HTML.
You can use it to 'tag' your address, for want of a better word; basically it's a way of telling bots, "hey, this string of characters that follows is an address". There is some conjecture as to its usefulness, but I believe it is best practice to use it, especially on local-search-focused projects.
More info here: http://microformats.org/wiki/hcard
And a nice tool here: http://microformats.org/code/hcard/creator
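For reference, a minimal hCard wrapped around an address looks something like this (the business name, address and phone number below are placeholders I've made up, not Mr Switch's real details):

<div class="vcard">
  <span class="fn org">Example Electrician Pty Ltd</span>
  <div class="adr">
    <span class="street-address">123 Example Street</span>,
    <span class="locality">Sydney</span>
    <abbr class="region" title="New South Wales">NSW</abbr>
    <span class="postal-code">2000</span>
  </div>
  <span class="tel">02 1234 5678</span>
</div>

The class names (vcard, fn, org, adr, locality and so on) are what the parsers look for; the visible text stays exactly as you would normally display it.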
I hope that helps
-
What's the score with hCards? I don't know much about them.
-
Also, when you say you backlink regularly, from where? Try to get backlinks from local sites to local pages... e.g. for Sydney Electrician, get backlinks from local Sydney directories, blogs & sites about Sydney, etc...
I haven't checked any of these for whether they're paid, nofollow, etc... but just as examples:
http://www.sydney-city-directory.com.au/
http://www.sydneycity.net/directory.htm
http://www.sydneybusinessdirectory.net/
http://www.expat-blog.com/en/directory/oceania/australia/sydney/
http://sydney-city.blogspot.com/
http://blogs.usyd.edu.au/sydneylife/
And obviously mix that with relevant links to do with electricians... preferably from sites that are both electrician and Sydney based, if possible... easier said than done, I expect.
The same for other trades and other towns.
-
I fully agree with what Ryan suggested above, if local is your target. Also consider using the hCard microformat on your address, as that can't hurt either.
-
Are you trying to get into the local listings? If that's the case, just get the address on the page and start submitting through Google Local. If it's a chain, submit the bulk listings and/or service areas. Pages like this: http://www.mrswitch.com.au/location are going to hurt you, as the engines will see it as a keyword-stuffing attempt to manipulate results around any Sydney suburb + "Electrician". I'd recommend getting rid of the location page in its current format and replacing it with a few of your actual locations. Use your keywords sparingly, and use the tools Google provides, especially mapping and reviews.
Related Questions
-
John Mueller says don't use Schema as it's not working yet, but I get markup conflicts using Google's mark-up
I recently watched John Mueller's Google Webmaster Hangout [Dec 5th]. In it he mentions to a member not to use Schema.org, as it's not working quite yet, but to use Google's own mark-up tool, the 'Structured Data Markup Helper'. Fine, this I have done, and one of the tags I've used is 'AUTHOR'. However, if you use Google's Structured Data Testing Tool in GWMT you get an error saying the following: Error: Page contains property "author" which is not part of the schema. Yet this is the tag generated by their own tool. Has anyone experienced this before? And if so, what action did you take to rectify it and make it work? As it stands I'm considering just removing this tag altogether. Thanks, David
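For context, the kind of mark-up being described looks roughly like this; it's an illustrative microdata sketch of an article tagged with an author, not the exact output of Google's Structured Data Markup Helper:

<div itemscope itemtype="http://schema.org/Article">
  <h1 itemprop="headline">Post title</h1>
  <span itemprop="author">David</span>
  <div itemprop="articleBody">Post content...</div>
</div>

The Testing Tool error in the question is complaining about that itemprop="author" value.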
Technical SEO | David-E-Carey
-
Sitemap issue? 404s & 500s are regenerating?
I am using the WordPress SEO plugin by Yoast to generate a sitemap on http://www.atozqualityfencing.com. Last month, I had an associate create redirects for over 200 404 errors. She did this via the .htaccess file. Today, there are the same number of 404s, along with a number of 503 errors. This new WordPress website was constructed in a subdirectory and made live by simply entering some code into the .htaccess file in order to direct browsers to the content we wanted live. In other words, the content actually resides in a subdirectory titled "newsite" but is shown live on the main URL. Can you tell me why we are having these 404 & 503 errors? I have no idea where to begin looking.
Technical SEO | JanetJ
-
Can Anybody Understand This?
Technical SEO | atakala
Hey guys,
These days I'm reading the paper by Sergey Brin and Larry Page, which is the first paper about Google.
And I don't get the ranking part, which is: "Google maintains much more information about web documents than typical search engines. Every hitlist includes position, font, and capitalization information. Additionally, we factor in hits from anchor text and the PageRank of the document. Combining all of this information into a rank is difficult. We designed our ranking function so that no particular factor can have too much influence. First, consider the simplest case -- a single word query. In order to rank a document with a single word query, Google looks at that document's hit list for that word. Google considers each hit to be one of several different types (title, anchor, URL, plain text large font, plain text small font, ...), each of which has its own type-weight. The type-weights make up a vector indexed by type. Google counts the number of hits of each type in the hit list. Then every count is converted into a count-weight. Count-weights increase linearly with counts at first but quickly taper off so that more than a certain count will not help. We take the dot product of the vector of count-weights with the vector of type-weights to compute an IR score for the document. Finally, the IR score is combined with PageRank to give a final rank to the document. For a multi-word search, the situation is more complicated. Now multiple hit lists must be scanned through at once so that hits occurring close together in a document are weighted higher than hits occurring far apart. The hits from the multiple hit lists are matched up so that nearby hits are matched together. For every matched set of hits, a proximity is computed. The proximity is based on how far apart the hits are in the document (or anchor) but is classified into 10 different value "bins" ranging from a phrase match to "not even close". Counts are computed not only for every type of hit but for every type and proximity. Every type and proximity pair has a type-prox-weight. The counts are converted into count-weights and we take the dot product of the count-weights and the type-prox-weights to compute an IR score. All of these numbers and matrices can all be displayed with the search results using a special debug mode. These displays have been very helpful in developing the ranking system."
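As a rough worked example of the single-word case described above (all of the numbers here are made up for illustration; the paper does not publish the actual weights): suppose the type-weights are title = 10, anchor text = 8 and plain text = 1, and a document has 1 title hit, 2 anchor hits and 50 plain-text hits for the query word. The counts are first converted into count-weights that taper off, so the 50 plain-text hits might only be worth a count-weight of about 5 rather than 50. The IR score is then the dot product of the two vectors: (10 x 1) + (8 x 2) + (1 x 5) = 31. That IR score is finally combined with the document's PageRank to produce the final rank, which is why no single factor, not even a huge pile of plain-text keyword repetitions, can dominate on its own.
-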
I'm getting duplicate content created with a random string of characters added to the end of my blog post permalinks?
In an effort to clean up my blog content I noticed that I have a lot of posts getting tagged for duplicate content. It looks like this: http://carwoo.com/blog/october-sales-robust-stateside-european-outlook-poor-for-ford and http://carwoo.com/blog/october-sales-robust-stateside-european-outlook-poor-for-ford/954bf0df0a0d02b700a06816f2276fa5/ Any thoughts on how and why this would be happening?
Technical SEO | editabletext
-
I always get this error "We have detected that the domain or subfolder does not respond to web requests." I don't know why. PLEASE help
Subdomain: www.nwexterminating.com. Subfolders: www.nwexterminating.com/pest_control, www.nwexterminating.com/termite_services, www.nwexterminating.com/bed_bug_services
Technical SEO | NWExterminating
-
What can I do if Google Webmaster Tools doesn't recognize the robots.txt file?
I'm working on a recently hacked site for a client, and in trying to identify how exactly the hack is running I need to use the Fetch as Googlebot feature in GWT. I'd love to use this, but it thinks the robots.txt is blocking its access, even though the only thing in the robots.txt file is a link to the sitemap. Under the Blocked URLs section of GWT it shows that the robots.txt was last downloaded yesterday, but that is incorrect information. Is there a way to force Google to look again?
Technical SEO | DotCar
-
Issue with 'Crawl Errors' in Webmaster Tools
Have an issue with a large number of 'Not Found' webpages being listed in Webmaster Tools. In the 'Detected' column, the dates are recent (May 1st - 15th). However, clicking into the 'Linked From' column, all of the link sources are old, many from 2009-10. Furthermore, I have checked a large number of the source pages to double check that the links don't still exist, and they don't, as I expected. Firstly, I am concerned that Google thinks there is a vast number of broken links on this site when in fact there is not. Secondly, if the errors do not actually exist (and never actually have), why do they remain listed in Webmaster Tools, which claims they were found again this month?! Thirdly, what's the best and quickest way of getting rid of these errors? Google advises that using the 'URL Removal Tool' will only remove the pages from the Google index, NOT from the crawl errors. The info is that if they keep returning 404s, they will automatically get removed. Well, I don't know how many times they need to get that 404 in order to get rid of a URL and link that haven't existed for 18-24 months?! Thanks.
Technical SEO | RiceMedia
-
If non-paying customers only get a 2 min snippet of a video, can my video length in sitemap.xml be the full length?
I am working on a website whose primary content is videos. They have an assortment of free videos, but the majority are viewable only with a subscription to the site. If you don't have a subscription, you can see a 2-minute clip of the contents of the video, but the full videos run anywhere from 10 minutes to 1.5 hours. When I am auto-generating the sitemap.xml, can I put the full length of the videos for paying members in the video:duration property? Or, because only 2 minutes is publicly available (unless you pay for a membership), is that frowned upon?
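For reference, this is the sort of video sitemap entry being discussed; the URLs below are placeholders and the duration (600 seconds, i.e. 10 minutes) is just an example:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
        xmlns:video="http://www.google.com/schemas/sitemap-video/1.1">
  <url>
    <loc>http://www.example.com/videos/some-video-page</loc>
    <video:video>
      <video:thumbnail_loc>http://www.example.com/thumbs/some-video.jpg</video:thumbnail_loc>
      <video:title>Some video</video:title>
      <video:description>A short description of the video.</video:description>
      <video:player_loc>http://www.example.com/player?video=some-video</video:player_loc>
      <video:duration>600</video:duration>
    </video:video>
  </url>
</urlset>

video:duration is expressed in seconds, which is where the question of full length versus the 2-minute public clip comes in.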
Technical SEO | nbyloff