That would be desirable, so you do not get any duplicate content penalty. It is then a question of which way you prefer the link juice to flow. If Andrea's page is more targeted and getting the links, then rel canonical links from the blog to those pages may be more beneficial.
oznappies
@oznappies
Job Title: Software Engineer
Company: Dream Building Pty Ltd
Favorite Thing about SEO: Learning New Ideas
Latest posts made by oznappies
-
RE: Canonicals Url question
-
RE: Rel canonical with index follow on query string URLs
It shouldn't, but I would always place the rel canonical as the first line in the head to ensure it is read first and all references are relative to it. This is a developer preference, as it is good design practice, and it works on the pages we host and create. You should also inform Webmaster Tools of any parameters you use and tell it to ignore them, even though you have the rel canonical.
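As a rough sketch of that placement (the URL here is just a placeholder), the canonical sits right at the top of the head, before the title and meta tags:

```html
<!DOCTYPE html>
<html>
<head>
  <!-- canonical first, so it is the first reference a crawler sees -->
  <link rel="canonical" href="http://www.example.com/topic/" />
  <title>Topic page</title>
  <meta name="description" content="A short description of the topic." />
</head>
<body>
  <p>Page content.</p>
</body>
</html>
```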
-
RE: Can Search Engines Read "incorrect" urls?
Search engines will read all your parameters unless you tell Google, via Webmaster Tools, which parameters to ignore. This can cause an issue: with a URL like domain.com/topic?keyword&somefield, the pages that include the keyword plus other parameters will share the link juice. So, if you have 10 options for somefield, each indexed page gets roughly 1/10 of the value.
So, it is better to use rewrites to include your keyword in the URL and then mark the parameters not to be indexed in Google etc.
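For illustration only (the domain, keyword and field names are hypothetical), the difference between the two URL styles looks like this in a link:

```html
<!-- Before: every combination of parameters is a separate URL sharing the value -->
<a href="http://domain.com/topic?keyword=womens-rain-jacket&amp;somefield=3">Womens Rain Jacket</a>

<!-- After: the rewritten URL carries the keyword itself, and the leftover
     parameters can be flagged as "ignore" in Webmaster Tools -->
<a href="http://domain.com/womens/jackets/rain/">Womens Rain Jacket</a>
```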
-
RE: Can Anyone show me a site that has followed the seomoz seo rules
I am not sure that it helps rankings yet, but it cannot hurt them and it will help at some point in the future. As search engines try harder to understand what sites are about, using contextual markup will help. These changes to HTML, and also the schema.org rich snippets, will be used in future and will help. SEO is evolutionary, as are development practices, and it is all about staying ahead of the competition when search engines change the playing field.
I do know that following the guidelines here, getting strong relevant links built and ensuring a fast user experience helps rank on sites we build. Does it give us an advantage over the competition? Maybe, only time will tell.
-
RE: Can Anyone show me a site that has followed the seomoz seo rules
We are setting up a site at http://www.dreambuilders.com.au which uses all those tags to separate articles from navigation and the aside. It is still in development but the HTML5 tags are set up.
Brett
-
RE: Can Anyone show me a site that has followed the seomoz seo rules
Sure Diane, thanks. In HTML5 there are specific tags to denote the type of content:
- <article> - means that the content between these tags is the main content
- <nav> - is the navigation links
- <aside> - is subsidiary content, such as ad content and general information
This allows for separation of interests and lets your site have a logical flow while still providing contextual information about the content. If you look at our markup you will see content wrapped in these tags.
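As a minimal sketch of how those tags fit together (the page content is made up, and the layout is just a typical one):

```html
<!DOCTYPE html>
<html>
<body>
  <nav>
    <!-- navigation links -->
    <a href="/">Home</a>
    <a href="/nappies/">Nappies</a>
  </nav>
  <article>
    <!-- the main content of the page -->
    <h1>Choosing a Modern Cloth Nappy</h1>
    <p>Article text goes here.</p>
  </article>
  <aside>
    <!-- subsidiary content: ads and general information -->
    <p>Free shipping on orders over $50.</p>
  </aside>
</body>
</html>
```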
-
RE: Can Anyone show me a site that has followed the seomoz seo rules
Hi, I am a developer and hire an external SEO to do the link building, but we do the site optimization ourselves, following guidelines pointed out here and using the Moz tools, for our site www.oznappies.com
We tag to HTML5 where it is clear what an article or main section is and what the navigation or subsidiary links are, as these are defined in the standard. This means we have total control over the meaning of the content that Google will index. I also noticed that Google is including site speed in their beta analytics, so we optimise for performance, using best practices and a CDN for JS libraries. It is worth running your site through www.gtmetrix.com to see where you have performance issues that will affect rank in the near future, as Google is aiming at a 5 second load time for user experience.
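As one example of the CDN point (jQuery on the Google CDN is just one common case, and the local script path is a placeholder), the library is pulled from a shared host rather than your own server:

```html
<!-- Library served from a public CDN: likely already cached by the browser
     and taking the load off your own server -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.6.4/jquery.min.js"></script>

<!-- Site-specific code kept minified and loaded after the library -->
<script src="/scripts/site.min.js"></script>
```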
We are a new site (3 months old) and have moved from 100+ to page 1 for all our targeted key phrases, including the most competitive ones. We have in-house content authors writing original content every couple of days and posting on relevant forums and blog comments. We are now in the process of tagging with schema.org rich snippets to prepare for search engines factoring this in.
-
RE: Google Displays Domain / URL Above Description?
We are seeing the same on the Australian Google, and we are a start-up (3 months old) but we are still climbing in rank, so it has had no adverse effect on us.
-
RE: URL Rewrite
I would get them in the shopper mindset and transfer it to the street. Ask them: do they go into a retail outlet and ask to see '82374 in category', as the shop assistant looks at them with a blank expression, or do they ask for a 'womens rain jacket'? When you bring it back to real-life examples, I find customers understand what you are trying to convince them of. So, why should it be different online? If you want any rank benefit from the URL it needs to have your keywords in it. If you are targeting 'womens rain jacket' and you get a mention in a blog etc., an anchor of 'www.company.com/womens/jackets/rain' still includes the keywords, whereas the cookie-cutter URL does not. It also makes the site look more professionally created than a DIY cookie-cutter version.
Brent makes good points, and you will see some initial fluctuation in rank but it should bounce back higher. I also like to add canonical head tags to mark the new origin of the site's pages. I would also prepare a new sitemap and submit it; if there are a lot of pages, make the move in groups, with a resubmit after each group. We have had pages bounce back much quicker than 30 days too, some in as little as a week.
-
RE: Usage of Schema.org Microdata?
I figure brand in the case of a restaurant would be FoodEstablishment -> LocalBusiness -> Organization, which sets your business branding across the site. You would also want to ensure any 404s that might happen go to the home page or an informative search page, as a usability feature.
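A rough microdata sketch along those lines (the business details are placeholders, and FoodEstablishment already inherits from LocalBusiness and Organization):

```html
<div itemscope itemtype="http://schema.org/FoodEstablishment">
  <h1 itemprop="name">Example Restaurant</h1>
  <p itemprop="description">Family dining in the heart of town.</p>
  <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
    <span itemprop="streetAddress">123 Example Street</span>,
    <span itemprop="addressLocality">Brisbane</span>
  </div>
  <span itemprop="telephone">+61 7 0000 0000</span>
</div>
```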
Brett
Best posts made by oznappies
-
RE: With a slash and without a slash
It is worth having standardised endings via a URL rewrite or 301 redirect. If both exist Google will flag it as duplicate content, which you will see in 'HTML suggestions' in Webmaster Tools. Since it is classed as duplicate (as with different case - upper/lower) the link juice will be split between the variations.
-
RE: Can Search Engines Read "incorrect" urls?
Search engines will read all your parameters unless you tell Google, via Webmaster Tools, which parameters to ignore. This can cause an issue: with a URL like domain.com/topic?keyword&somefield, the pages that include the keyword plus other parameters will share the link juice. So, if you have 10 options for somefield, each indexed page gets roughly 1/10 of the value.
So, it is better to use rewrites to include your keyword in the URL and then mark the parameters not to be indexed in Google etc.
-
RE: Google Displays Domain / URL Above Description?
We are seeing the same on the Australian Google, and we are a start-up (3 months old) but we are still climbing in rank, so it has had no adverse effect on us.
-
RE: Does google scrape links from PDF files? do these links pass link juice?
Have a look at this article http://searchenginewatch.com/article/2067225/Google-Does-PDF-Other-Changes - it explains some of the document library search for PDF files - and Google's statement here: http://googleblog.blogspot.com/2008/10/picture-of-thousand-words.html
-
RE: Wrong types of questions...
Coming from the other side of the fence as a developer and business owner, and not an SEO person, I find that this is a great resource channel to discuss ideas. I will ask questions at times that are basic, sometimes because I see a different view on here or have an answer from the external SEO company we use. I like to know why something is done, not just that it is done. There are many responses I scan on here to find that information, and if I cannot find it I ask. Having spent many years on cutting-edge software, I am used to asking questions in forums and just getting silence; it is refreshing in here to see prompt answers, support (thumbs up and good answer icons) and discussion on various topics.
The title of this discussion prompts me to say: 'The only wrong question is the one not asked. When you assume you know the answer to every problem then you become the problem.'
-
RE: Why doesnt Seomoz give daily ranking updates
It could, but it looks like it is moving to the Firefox toolbar tools soon. It's great for following those terms you are currently targeting, where subtle changes can make a difference in smaller markets than google.com.
-
RE: Canonical Tag for a 404 page
You could, but you would be better off creating a search page so 404s go to www.example.com/search.aspx and users can search for the content they were actually looking for in the first place. Ideally all your pages should have the canonical in the head to ensure trailing slash or capitalization errors all pass juice to the correct page and do not get reported as duplicates.
-
RE: Does google scrape links from PDF files? do these links pass link juice?
Yes it does, according to the Google tech spec http://code.google.com/apis/searchappliance/documentation/50/admin_crawl/Introduction.html
which specifically states it follows HTML links in PDFs: 'It follows HTML links in PDF files, Word documents, and Shockwave documents'. Google's own API docs carry more weight than a comment in a forum. If they are licensing this out as an application it would suggest that the same technology is available in the main engine, as does Dunamis's comment about a listing in a PDF document being found in search results.
You can test for yourself by publishing a PDF with a link to an info page that does not show up in any other links. Include the PDF in your sitemap but not the test page, and check if it shows in Google's index (site:yoursite.com) the next time it crawls.
This interview with Matt Cutts also gives some insight - http://www.stonetemple.com/articles/interview-matt-cutts-012510.shtml
Eric Enge: What about PDF files?
Matt Cutts: We absolutely do process PDF files. I am not going to talk about whether links in PDF files pass PageRank. But, a good way to think about PDFs is that they are kind of like Flash in that they aren't a file format that's inherent and native to the web, but they can be very useful. In the same way that we try to find useful content within a Flash file, we try to find the useful content within a PDF file. At the same time, users don't always like being sent to a PDF. If you can make your content in a Web-Native format, such as pure HTML, that's often a little more useful to users than just a pure PDF file.
-
RE: How long after making changes will position on Google be altered?
Submitting a current sitemap can help the process, at least to get the ball rolling. We tend to see a spike in crawl rate after doing this.
-
RE: Low bounce rate; need help troubleshooting code
If you visit gtmetrix.com and look at the 'Timeline' you will see two JavaScript calls, one to external-tracking-min.js and one to ga.js; both are most likely calling analytics, as on a JS scanner I see ga.js being called twice.
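As an illustration of what that can look like in the page source (the first file name is taken from the question; the rest is the standard asynchronous Analytics snippet with a placeholder account ID):

```html
<!-- First include: a bundled tracking script that itself loads ga.js -->
<script src="/js/external-tracking-min.js"></script>

<!-- Second include: the standard async Google Analytics snippet, which loads
     ga.js again - two trackers firing on one page view can distort bounce rate -->
<script>
  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-XXXXXXX-1']);
  _gaq.push(['_trackPageview']);
  (function() {
    var ga = document.createElement('script');
    ga.type = 'text/javascript';
    ga.async = true;
    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
    var s = document.getElementsByTagName('script')[0];
    s.parentNode.insertBefore(ga, s);
  })();
</script>
```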
Brett - C# developer specializing in digital media services.
Liara - Marketing, Author