As long as these links are editorially given and come from good domains, I think there is a lot of benefit in them. Of course links from related domains are better (if only for the referral traffic), but I personally don't think off-topic links will be devalued or seen as manipulative.
Best posts made by Theo-NL
-
RE: The Value of Off-Topic Guest Posts
-
RE: Non-Recognition of Links
The fact that OSE doesn't pick up a link doesn't necessarily mean the link isn't 'active' and passing value to your site. Even though Linkscape captures a vast number of URLs, it only crawls a portion of the web, most likely working from the bigger pages down. If many of the links to your site come from smaller / less powerful domains, they might not (yet) have been picked up by Linkscape.
Try looking at Google Webmaster Central to see if the links are included there. If Google lists them as links, they are very likely being counted as well.
-
RE: Which pages to "noindex"
Noindexed pages are pages that you want your link juice to flow through, but that you don't want to rank as individual entries in the search engines.
-
I think your legal pages should rank as individual pages. If I wanted to find your privacy policy and searched for 'privacy policy company name', I'd expect to find an entry I can click to reach your privacy policy.
-
Your search results pages (the internal ones) are great candidates for a noindex attribute. If a search engine robot happens to stumble upon one (via a link from somebody else, for example), you'd still want the spider to crawl onwards from there and spread link juice over your site. However, under most circumstances you don't want the result page itself to rank in the search engines, as it usually offers thin value to your visitors.
-
Blog archive and category pages are useful to visitors and I personally wouldn't noindex them.
Bonus: your paginated results ('page 2+ in a result set that has multiple pages') are great candidates for noindex. It'll keep the link juice flowing, without having all these pretty much meaningless (and highly dynamic) pages in the search index.
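For reference, the tag I mean looks like this (a minimal sketch to place in the <head> of such a page; 'follow' is the default value, but spelling it out makes the intent clear):
  <!-- keep this page out of the index, but let the spider follow its links -->
  <meta name="robots" content="noindex, follow" />
This keeps the page out of the search results while still letting link juice flow through it.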
-
-
RE: How to quickly up your PA/DA?
You can't 'quickly' raise your PA/DA in a legit and normal way.
Aside from the caps Google is said to place on how fast these values are allowed to grow over time, raising them requires large numbers of backlinks from strong domains. Focus on creating top quality content that your target audience wants to read, share and link to. A higher PA/DA will follow.
-
RE: A tool for seeing all the keywords a website ranks for
There was a very recent blog post by Dr. Pete on the SEOmoz blog about this very subject; you might find it interesting:
-
RE: Getting rid of duplicate content with rel=canonical
To me this sounds like a clear-cut case of a need for the canonical tag.
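As a reminder of what that looks like (a minimal sketch with a placeholder URL): in the <head> of every duplicate version of the page, point a canonical at the one version you want indexed:
  <!-- placeholder URL: point this at the version of the page you want indexed -->
  <link rel="canonical" href="http://www.example.com/preferred-page/" />
The duplicates then pass their value on to the preferred URL instead of competing with it.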
-
RE: NEW LINKS
I'm afraid I can't give you any specific number to aim for. Perhaps somebody else has experience with what a truly natural number of new links is for your type of website (which we don't know yet, by the way)?
What you could try to look at: would any of these links have occurred if it weren't for your manual effort? Without your effort, at what rate would this particular domain have acquired links?
For example: if this is a site about one specific type of highly niche screwdriver, it would be far less likely to attract a significant number of backlinks per day than a tech blog that regularly lands big scoops.
-
RE: Certain Domains no longer recognised by open site explorer
I personally doubt these links (or any links for that matter) are 'decaying'.
Knowing that SEOmoz only crawls a portion (however large, still a portion) of the pages on the web, the pages that once showed up containing your links may simply not be crawled anymore because they are 'not important enough'.
To either strengthen or break the theory above: are these 'decaying' links coming from authoritative / strong domains, or from weaker domains that may have fallen out of the crawling range of the SEOmoz crawler?
-
RE: Should I create mini-sites with keyword rich domain names pointing to my main site?
"What you describe is ok, no policies are violated."
I strongly disagree on this one. What he describes sounds an awful lot like Google's description of a 'doorway page', which is explicitly against the Google Webmaster Guidelines:
"Doorway pages are typically large sets of poor-quality pages where each page is optimized for a specific keyword or phrase. In many cases, doorway pages are written to rank for a particular phrase and then funnel users to a single destination." (http://www.google.com/support/webmasters/bin/answer.py?answer=66355)
In light of the above I would advise against the practice you're describing. Instead I would optimize sections of the main site for such keywords, or set up independent (complete and value-adding) domains for the keywords you'd like to rank for.
-
RE: List of High Ranking Directories
This one might be a bit outdated, but it contains some great directories: http://www.seomoz.org/directories
-
RE: Can't find email address or contact form on website I want link from
If you want to get in contact with them, can't you just use their Twitter account or Facebook fan page? What additional benefit does an email address give you over their Facebook and Twitter details? If they wanted you to be able to email them, they would have put an email address or a contact form on their website.
-
RE: Https indexed - though a no index no follow tag has been added
Assuming the website we're talking about is the same as in your email address, I looked at the meta robots tag for the page https:// webshop . acsi . eu / en / checkout / onepage / (spaced to prevent indexation of this post).
Are you sure the right meta tags are in place?
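For reference, if you really want that checkout page kept out of the index, the tag in its <head> should look something like this (a minimal sketch):
  <meta name="robots" content="noindex, nofollow" />
If a different robots value (or no tag at all) is being output on the live page, that would explain why the URL is still being indexed.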
-
RE: What to do about bad backlinks hurting authority?
I wouldn't worry too much about a couple of bad backlinks. Like you proposed yourself, I would just go after many more quality backlinks, which will make these few bad links just another small part of your natural link profile.
Added to that, I recall Matt Cutts saying that incoming links can never hurt your rankings (assuming they aren't part of some kind of bigger link scheme); unfortunately I can't seem to find the source for that quote anymore.
No worries!
-
RE: Site rank checking tool
Dr. Pete wrote an excellent blog about this a while ago: http://www.seomoz.org/blog/what-keywords-do-i-rank-for
-
RE: Forums to increase the Link Authority
As long as you keep it natural and view it more as a source of traffic rather than SEO metrics, you should be fine posting on forums, blogs, guest posts, etc.
-
RE: Ranking on page 5 for a 1% difficulty keyword
Page 5 for a 1% keyword at Google, with normal rankings at Bing and Yahoo looks like a -50 penalty from Google to me. The dodgy insurance content that was on the domain before might be the cause of this.
More on Google penalties:
-
RE: Updated title tags not displayed in SERPs?
If you visit the pages yourself and the website shows the correct (new) title in the title bar and address bar of your browser, Google will find out sooner or later and start using the new titles and URLs. Patience is all it takes!
-
RE: Somebody hacked many sites and put links to my sites in hidden div
Discounted? Maybe. Penalized? Not sure.
Google will have a hard time reading the intention of these links. How can they be sure these links were placed by the competitors of Anton rather than Anton himself?
-
RE: Can you optimize for 2 keywords per URL?
You can optimize pages for multiple keywords, but I'd be careful about using your URLs for that. The URL should preferably contain the subject the page is about (which could or should coincide with your major keyword). Adding too many different keywords to your URLs will look spammy, and you don't want that.
-
RE: A domain is ranking for a plural key word in SERPs on page 1 but for the singular not at all?
I don't think this is (in most cases) related to penalties.
My best guess would be exact keyword optimization (somebody optimizing a page for 'book' rather than 'books' will most likely rank higher for 'book' than for 'books') and incoming anchor text.
-
RE: URL with two forward slashes //
The // is most likely the result of a faulty setting in the (URL routing) system and should be something that can be fixed. As for both the // and the /cpage: is it optimal? Most certainly not, neither for search engines nor for visitors. Will it harm your rankings badly? I don't think so. Even though it is something you'd rather not have in your URL, all other things being equal you probably won't notice it in your rankings.
-
RE: Google Plus 1 button appearance
Not even the customization page (http://code.google.com/apis/+1button/) mentions anything about changing the look of the button.
It does make sense from their perspective for Google to enforce this design: this way everybody will associate the look of this particular button with the +1 service, even though that is indeed unfortunate for end users who want it to blend in with the rest of their design.
-
RE: How far into a page will a spider crawl to look for text?
Far, far more than 3 kB. Somewhere halfway through this blog post (http://www.finishjoomla.com/blog/41/does-source-code-ordering-still-matter-for-seo/) you'll find some references to sources on this same issue; they might be helpful for you.
-
RE: Nofollowed internal links from the home page
This is called 'PageRank sculpting'. It was used in the past to make sure the links without nofollow received a larger amount of link juice (back then, the link juice passed by a page was only split over its followed links).
There was a 'recent' change in the way internal nofollow links are treated, that renders this practice pretty much useless nowadays. Read more about it here: http://www.seomoz.org/blog/link-consolidation-the-new-pagerank-sculpting
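A simplified example of the change (my own numbers, purely illustrative): suppose a page carries 5 links and you nofollow 2 of them.
  <!-- old behaviour: juice was split over the 3 followed links only, so each passed roughly 1/3 -->
  <!-- current behaviour: juice is split over all 5 links; each followed link passes roughly 1/5,
       and the share assigned to the 2 nofollowed links simply evaporates -->
  <a href="/about/">About</a>
  <a href="/products/">Products</a>
  <a href="/contact/">Contact</a>
  <a href="/login/" rel="nofollow">Login</a>
  <a href="/cart/" rel="nofollow">Cart</a>
So nofollowing internal links no longer pushes extra juice to the remaining links; it just wastes it.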
-
RE: Optimizing Internal Links to Homepage
Research by SEOmoz has shown there is little to no value in having optimized anchor text for your home page:
http://www.seomoz.org/blog/testing-the-value-of-anchor-text-optimized-internal-links
-
RE: First link count - passing of pagerank
I'm pretty sure it only applies to anchor text. PR will still be evenly split over all the links you put on a page (both internal and external); not even a rel="nofollow" will change the distribution of link juice over those links.
-
RE: Does Code Order Matter?
I got the same question during a presentation I was giving recently and have to admit I didn't know the answer on the spot either. Some thinking and discussing with others has given me a pretty clear picture on this though, which I will now try to pass on to you.
I don't think code order matters that much anymore. A couple of years ago, when Google was crawling only a portion of a large webpage (mostly due to hardware restrictions), you'd better make sure your valuable text or links were placed in the first part of your webpage, otherwise they wouldn't get crawled at all! Nowadays Google crawls large webpages in total (if I recall a quote from Matt Cutts correctly, he stated that they now index webpages over several MBs in size, as long as they contain enough valuable information), so that reason to front-load your code is gone.
With Google advancing its detection of the 'visual location' where text and links are placed on a webpage (see #5 on http://www.seomoz.org/blog/10-illustrations-on-search-engines-valuation-of-links), source code ordering will most likely have dropped in value as well. Using CSS styling, we can now order our source code pretty much at will anyway, which has turned it from a valid signal into an 'SEO trick' (just like adding a suffix to the URL has become, see http://www.finishjoomla.com/blog/5/does-adding-a-suffix-to-my-urls-affect-my-seo/).
By 'viewing' (and perhaps manually categorizing, or using machine learning, http://en.wikipedia.org/wiki/Machine_learning) webpages, Google will notice patterns in webpage source code. For example: the div containing a large number of links, usually placed in a ul/li and often containing links to 'home' and 'contact', will most likely be your menu. Likewise, the div containing more text than any other div, often starting with an H1 or H2 tag, containing the most images and ending with a call-to-action link, will most likely be your page content area. Thus, Google doesn't 'know' whether a certain part of your source code is your menu, your sidebar or your page content; it deduces it by looking at common patterns.
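A rough sketch of the kind of patterns I mean (purely illustrative markup, nothing Google has published):
  <!-- a div full of links in a ul/li, with 'Home' and 'Contact': most likely the menu -->
  <div id="menu">
    <ul>
      <li><a href="/">Home</a></li>
      <li><a href="/contact/">Contact</a></li>
    </ul>
  </div>
  <!-- the div with the most text, starting with an H1 and ending in a call to action: most likely the content area -->
  <div id="content">
    <h1>Page title</h1>
    <p>The largest block of text on the page, usually with the most images ...</p>
    <a href="/order/">Order now</a>
  </div>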
(lol, my answer is more than three times as long as your short question!)
-
RE: Optimizing a website which uses JavaScript and jQuery
Google will see the content as plain text; check seobrowser.com, for example.
However, there is a chance that Google is parsing the CSS to find out that this particular content is hidden, and therefore devalues it. I don't recall ever reading about this, but it would make some sense because many visitors won't see the text.
-
RE: Does Code Order Matter?
Perhaps HTML5 tags are used as one of the signals in determining which piece of source code is what. Seeing how easily one could manipulate these tags, I don't think they'll be a strong signal though. Of course they can be good guidance for future web developers to identify pieces of source code!
Google is able to read CSS files (for example to determine if a link is hidden), but I don't think it will parse these files and apply them to the webpage to determine the visual layout of it. I think it would require a great amount of processing power (and time) to actually render a webpage, rather than sort out the pieces based on the source code like I described in my answer above.
Glad I could help!
-
RE: Once duplicate content found, worth changing page or forget it?
Changing them will definitely still matter! Google will periodically visit your (and every other) website to see if content has changed or new content can be found. Once you make sure the content isn't duplicate anymore, Google will discover this and might (or might not) act upon it.
-
RE: Google and display:none
I agree with Aran: as long as you use display:none in a legitimate way, Google is unlikely to penalize you for it. Even if they went as far as manually reviewing your website, they would see you intended no foul with the display:none.
-
RE: Does it sound like a linkwheel to you?
You're welcome.
As long as you keep this interlinking between those blogs natural (and in order to benefit your visitors), you could do this, yes.
-
RE: Links from my homepage
In my opinion this should be far more focused on what a visitor wants to see here than on what a search robot should encounter.
Make sure your main navigation is search engine friendly (no images, Flash, JavaScript, etc.) so that all your main pages are linked there. You could choose to link to some of your latest or best articles (to show your visitors that your website is up to date) and include some testimonials (to show you're credible). Perhaps in your footer you could include some partners (keep it natural), your privacy policy and terms of use?
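As a minimal sketch of what I mean by a search engine friendly main navigation (plain text links the spider can follow; the URLs are just placeholders):
  <!-- placeholder URLs -->
  <ul id="main-navigation">
    <li><a href="/">Home</a></li>
    <li><a href="/products/">Products</a></li>
    <li><a href="/about-us/">About us</a></li>
    <li><a href="/contact/">Contact</a></li>
  </ul>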
Try asking your usability testers what they think should be on the main page and work from there.
-
What the Panda are we doing wrong?
Starting on June 8 of this year (the exact date of the Panda 3.7 update), the organic search engine traffic to our website dropped by about 30%. We're talking about a fairly new domain (about 8 months old) that has (or at least is supposed to have) pearly white SEO, and no outside parties have ever done any SEO for it. Organic search traffic was very stable in the weeks prior to June 8.
Organic search visits have dropped pretty much across the board (due to dropped rankings in the SERPs, as reported by our SEOmoz campaign). The (not provided) keyword has dropped 25%, while traffic from keywords related to our core products (Joomla templates) has dropped almost 50%.
Knowing that June 8 saw a Panda update, I dug up some of the old Panda posts (never thought I'd need those for one of my own sites) to see what factors trigger a Panda hit. Based on the factors mentioned in this article at SEW, I'll briefly discuss what is going on at our website.
Affiliate links and ad units
Not a single affiliate link or ad unit can be found on our website.
Low-quality or thin content
Only 163 URLs from the www subdomain have been submitted in our sitemap, of which 152 are indexed. About 25 of those pages (the individual questions on our FAQ page) could in my opinion be characterized as 'thin content' pages.
Canonicalization
Every single page on our www subdomain has a rel="canonical". Given that the demo subdomain is based on Joomla, we have less control over those pages (and there will probably be some duplicate content issues there), but nothing more than any clean Joomla website would have.
Site speed
Our www subdomain receives a near-perfect 97/100 on YSlow; the demo subdomain scores an 83/100.
Quality
In the past months several popular resources (blogs, infographics) have been released that were well linked to by other (significant) players in our niche.
Social signals
Our site received about 25 +1's, several dozen (or more) tweets and a few Facebook Likes.
Search result pages
We don't have those.
Questions:
-
Can anybody spot potentially Panda-triggering issues on our website?
-
I'm aware that our link profile isn't perfect (not very bad either), but to my knowledge Panda was/is an on-page driven algorithm update, right?
-
We're also running a demo subdomain (click 'demo' in the menu), hosting five full Joomla installations there to showcase our products (just like virtually all other template providers do). This subdomain also seems to have taken a hit, but less than the www subdomain (about a 15% decrease in organic search visits). Is it possible that the demo subdomain has triggered this issue (and if so, what changes would you advise)?
Any help would be greatly appreciated!
-
-
RE: Meta refresh and java script
You should change the meta refresh to a real 301 redirect, for example using .htaccess or PHP.
You can learn more about this topic here: http://www.seomoz.org/learn-seo/redirection
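As a minimal sketch, assuming Apache with mod_alias available (the paths are made up, adjust them to your own old and new URLs), a single 301 in .htaccess could look like this:
  # made-up paths: redirect the old URL to its new location with a 301
  Redirect 301 /old-page.html http://www.example.com/new-page.html
The old URL then answers with a proper 301 status code, so search engines transfer the link value to the new URL instead of relying on the meta refresh.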
-
RE: Do images work as a H1
No. Actual text is slightly more effective than an image's alt text. You might want to have a look at CSS image replacement techniques.
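A minimal sketch of one such image replacement technique (the class name and image path are made up): the H1 remains real text for the search engines, while visitors see your logo image:
  <h1 class="logo">Your Brand Name</h1>

  .logo {
    width: 200px;
    height: 60px;
    background: url(/images/logo.png) no-repeat; /* made-up logo path */
    text-indent: -9999px; /* pushes the real text out of view */
  }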
-
RE: How to Define Best URL Structure for Product Pages?
I personally don't think that changing your URLs as you described will result in big increases in organic search rankings. Especially considering the work required (and the potential loss of incoming links to URLs you forget to redirect), I wouldn't recommend the change you've described.
If, however, you really want to change the URLs, this is the structure I'd advise:
www.example.org/category-name/123-product-name
This allows people to cut off a piece of the URL and land on your category overview page, shows them which category a product belongs to, and keeps the number of 'sub levels' to a minimum by including the ID in the second level.
-
RE: What kind of pages are they?
Which 'tool' are you referring to exactly? If you don't know what these links are, most likely they are just spam bots trying to find security holes in your application, or links created by a plugin. Nothing to worry about most likely.
-
RE: Best Way to Use Date in Title
I don't think there is a better way to name a dated document than by using a form of a date.
However, you might want to consider using a different format (because you've mentioned these posts were weekly), for example "Beauty Industry News - Week 13 of 2013". Just a personal preference though, no gains or losses in the search engines there that I'm aware of.