Ayup - and to accomplish that means being willing/able to do some business analysis, not just site analysis. Which moves into the realm of web marketing optimization, not just SEO. Which is where the real value in this whole process lies IMO
Best posts made by ThompsonPaul
-
RE: Estimating the number of LRD I need to outrank competitor
-
RE: My pages are not listed in search results
First thing we need to be sure of is how you're doing your searching, puremobile. When you say "when I exact search a product for example" do you mean you are doing that search in your own browser?
Due to all the personalisation of search results that Google does, if you're doing that search from your own browser, it's pretty much guaranteed to give you the .ca results as you are searching from Canada. In fact, if you are doing a default search, you are probably searching from Google.ca which is even more likely to give you that .ca result. (Even using incognito browser mode, you'll still likely see this)
To get around this, you can set up Google.ca and Google.com as two different search engines to track rankings from in each of your SEOmoz campaigns. Ideally, you'll want to set up a separate campaign for each version of your site (.com and .ca).
Still not 100% accurate as these rankings still don't take all the other personalisation elements into account, but will give you a better idea of the rankings for each site.
As Matt points out, you'll still want to do what you can to give the engines as much help understanding which site to serve which visitor as possible. That work will also be easier to track if you set up both sites and separate campaigns and track your efforts individually.
Hope that helps?
Paul
-
RE: Htaccess - Redirecting TAG or Category pages
The regex in your RedirectMatch doesn't say what you think it says, Jes
This part of the expression - note the (.*) at the end:
/category/Sample-Category(.*)
doesn't actually say "match the URL that is specifically /category/Sample-Category".
That .* is a wildcard that means "and any other additional characters that might occur here".
So what it's saying is "match the URL /category/Sample-Category as well as any URLs that have any additional characters after the letter "y" in Category". Which is what is catching your -1 variation of the URL (or the -size-30 in your second example).
In addition, that wildcard has been captured as a variable (the fact it's in brackets), which you are then attempting to insert at the end of the new URL (with the $1), which I don't think is your intent.
Instead, try:
RedirectMatch 301 /category/Sample-Category https://OurDomain.com.au/New-Page/
you should get the redirect you're looking for, and not have it interfere with the other ones you wish to write.
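If you find a plain pattern still catches the longer URLs (RedirectMatch treats its pattern as a regex, so an unanchored pattern can match as a substring), you could anchor it explicitly. A minimal sketch reusing your example paths:
# Matches only /category/Sample-Category itself, with or without a trailing slash
RedirectMatch 301 ^/category/Sample-Category/?$ https://OurDomain.com.au/New-Page/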
Let me know if that solves the issue? Or if I've misunderstood why you were trying to include the wildcard variable?
Paul
P.S. You'll need to be very specific whether the origin or target URLs use trailing slashes - I just replicated the examples you provided.
-
RE: Throttling after about 20 requests.
Thanks for the background, Sam!
<object id="plugin0" style="position: absolute; z-index: 1000;" width="0" height="0" type="application/x-dgnria"><param name="tabId" value="ff-tab-10"> <param name="counter" value="121"></object>
-
RE: Image Alt Text: Do I Need to Link my Image for it to Count?
I'm with Francisco - whoever told you that is flat out wrong.
The alt attribute in its purest function is to provide a text description of images for vision-impaired page visitors. It is designed to be read by screen-reader software so the visually impaired can still understand what the image is and how it relates to the rest of the page. There is absolutely no reason a link would need to be included in order for the alt attribute to do its job. Put another way - the alt attribute is applied to the img tag. The link tag (href) is totally separate.
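To make that concrete, here's a minimal sketch (the filename, text and URL are just placeholders):
<img src="red-widget.jpg" alt="Red widget with chrome trim">
<!-- Only wrap the image in a link if you actually want it to take the visitor somewhere: -->
<a href="/red-widget/"><img src="red-widget.jpg" alt="Red widget with chrome trim"></a>
The alt attribute does its job either way - the link is a separate, optional decision.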
As far as what you're doing now, the link is totally extraneous and would probably annoy/confuse a user who thought clicking on a linked image would actually take them somewhere else.
Unlike Francisco, I strongly suggest, unless you're linking to a larger version of the same image, that you simply include no link at all. Having it link to its attachment page in WordPress is also a very bad idea as that creates huge numbers of very thin-content pages which give no value to the visitor, cause major site-crawling problems and are worthless if they get ranked in image search.
There's even a setting in Yoast's Wordpress SEO plugin to handle this issue by redirecting the attachment page back to the original blog post the image appeared on. (Note this is not the same as what you're currently doing) Again, if Wordpress, simply set None for the link URL.
Hope that makes sense?
Paul
-
RE: Probably basic, but how to use image Title and Alt Text - and confusing advice from Moz!
That Moz help page is kinda half-right. For many browsers, in the absence of a title attribute, they will display the alt text on hover instead. But if a title attribute is declared, it will be used, as you note.
Keep in mind - image title attributes are not used as ranking factors for regular search, but they are used as ranking factors for Google Image Search. So still well worth optimising them if your site benefits from image search specifically (as a good photographer's site likely would).
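For example, with both attributes in place (hypothetical filename and text):
<img src="sunset-wedding.jpg" alt="Bride and groom at sunset on the beach" title="Sunset beach wedding, Tofino">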
Paul
-
RE: How can I set up a campaign to track just directories on a specific subdomain?
This is done on the first page of Campaign Setup, Chaya. There's a radio button to select that you want to track specifically a subdomain (it's the first button) and then below, there's a text box where you enter the exact address of the subdomain. The page also has examples of typical subdomain addresses to prompt you.
I've included a screenshot with arrows pointing to the boxes you need to use.
Does that get you set up the way you need?
Paul
-
RE: Redirecting non-www to www
If you want to see the true effect of this "split" you can use Open Site Explorer to check the incoming links of each version of the URL. If there's a major difference, pick the version with the most incoming links as your primary (or canonical) version. Then redirect the secondary version to that URL. Like Jesse, I almost always find the www version is the better one to make primary.
There's no reason not to do this redirecting, and every reason to do it. You may find the rankings fluctuate a little for a day or 2 as the search engines update themselves. But if you don't do this, you essentially have your two sites competing against each other and splitting their value between them. Which means other sites will outrank you even though their "score" is lower, because your score has been split.
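On an Apache server, the redirect itself is usually just a few lines in your .htaccess file. A minimal sketch - example.com is a placeholder for your own domain, and this assumes you've picked the www version as primary:
RewriteEngine On
# Send any non-www request to the same path on the www version
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]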
Google Webmaster Tools can also tell you this info very effectively, but to get it you're going to have to create a second site inside your GWT account. When you set up your existing GWT site, you used either the www.example.com or non-www version of your website. Whichever address you chose, that is the ONLY index data provided in that GWT site.
As further proof search engines consider them separate sites, the only way to check the other version of your URL is to actually create a whole new site (in the same GWT account) using the other URL version as the setup address.
When on your main dashboard, you'll see a red button in the top right corner for Add a Site. Use the URL that's different from the one you used the first time. Then set up and verify as normal. You can use the same verification method you used the first time - most often just using your Google Analytics account to verify is easiest, but if you uploaded a special file or are using the header snippet, those will work again too.
Once you've got the second site set up, you will be able to compare the indexing and incoming links reports to see the differences between the two versions as Google sees them.
Last benefit of setting up both sites - you can now use the GWT tools Configuration -> Settings to tell Google which version of the site is your Preferred Domain. (you can only do this properly if you have both site versions set up) Set the same version of the preferred domain in both versions of the site and you'll give Google a second indicator for which version of your domain is the primary.
Hope that makes sense?
Paul
-
RE: How do I know if I am correctly solving an uppercase url issue that may be affecting Googlebot?
It was still a good idea to create the redirects for the upper-case versions to help cut down duplicate content issues. Rel-canonical "could" have been used, but I find it's much better to actually redirect.
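For reference, standard .htaccess rules can't lowercase URLs automatically (that typically needs a server-level RewriteMap), so these redirects usually end up as one rule per known uppercase URL - a sketch with hypothetical paths:
Redirect 301 /About-Us /about-us
Redirect 301 /Product-Categories/Widgets /product-categories/widgets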
But that means the lower-case URLs are the canonical URLs, so ONLY they should appear in the sitemap. (Sitemaps aren't supposed to contain any URLs that redirect.) Right now, you're giving the search crawlers contradictory directives, and they don't do well with those.
For additional cleanup, it would be good to have rules added to the CMS so that upper-case URL slugs cannot be created in the first place. Also run a check (can probably be done in the database) to ensure that any internal links on the site have been re-written NOT to use the uppercase URLs. There's no sense generating unnecessary redirects for URLs you control. (I suspect this is the majority of the cases that Screaming Frog is picking up.) You need to ensure all navigation and internal links are using the canonical lowercase version.
The more directly the crawlers can access the final URL, the better your indexing will be. So don't have the sitemap sending them through redirects, and don't let your site's internal links do so either.
Hope that helps?
-
RE: On Link Analysis tab I my best pages are 301 and 404 pages.
Answered in order, Stephen:
1. Normally, the fact that ecowindchimes.com/ is getting redirected is a good thing. But in this case, there's a problem with how it's been done. If you want to see where it's redirecting to, just enter that exact address in your browser's address bar and watch what happens to the address. It will change to www.ecowindchimes.com/default.asp. Done correctly, it should redirect to www.ecowindchimes.com (no default.asp extension)
This is called canonicalisation and it's designed to make sure your site's pages only get indexed under 1 URL. As far as the search engines are concerned, ecowindchimes.com/ and www.ecowindchimes.com/ are separate sites, even though the pages themselves are the same. By redirecting one to the other, you remove this problem.
The problem here is that www.ecowindchimes.com/default.asp is yet another way to refer to what is simply the home page of your site. This means that now, instead of ecowindchimes.com/ and www.ecowindchimes.com/ being duplicates of each other, you have www.ecowindchimes.com/ and www.ecowindchimes.com/default.asp being dupes of each other. This needs to be corrected in your .htaccess file using 301 redirects.
You need to make ecowindchimes.com/ use a proper canonical redirect to www.ecowindchimes.com. This will catch ALL versions of your pages that start without the www and redirect them to the correct URL with the www. (You'll need to delete the existing redirect as it's incorrect.)
Then you need to 301-redirect www.ecowindchimes.com/default.asp directly to www.ecowindchimes.com/.
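If the host supports .htaccess, those two redirects might look something like this - a sketch only, since a Volusion/Windows setup may need the equivalent configured in its own control panel instead:
RewriteEngine On
# Redirect the duplicate home page URL straight to the root
RewriteRule ^default\.asp$ http://www.ecowindchimes.com/ [L,R=301]
# Redirect all non-www URLs to the same path on the www version
RewriteCond %{HTTP_HOST} ^ecowindchimes\.com$ [NC]
RewriteRule ^(.*)$ http://www.ecowindchimes.com/$1 [L,R=301]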
2. I've answered your question #2 as fully as I can on your followup question. In short, your Volusion system is faking that 404 status as a hack to deal with the fact they don't handle 404 errors in the normal way. The page exists fine and that particular error can be ignored.
3. For your last question, this demonstrates why the hacked method for 404s being used is a problem. URLs that should be 404s will show up looking like 301 redirects instead, because of the hack in use.
To test this URL, again just put it in your address bar and watch where it gets redirected to. In this case, it's going to your 404 page (See answer to your followup question as linked above for explanation why this happening)
So, yes, you need to try to fix these kinds of errors. It's saying that somewhere, there is a page linking to ecowindchimes.com/index.html?lang=en-us&target=d2.html. That is a URL from your old site before you migrated to the Volusion system. It likely means that there are pages on other websites still linking to your old page addresses. It's also possible these are outdated links from within your own pages.
By entering that URL in the Wayback Machine I can see that it was for a page for Gracenote windchimes, so you'll want to redirect it to your Gracenotes category page.
If you want to figure out what pages those broken links are coming from, you can use Webmaster Tools 404 error report and SEOmoz's 404 error report as starting points. NOTE! You'll need to actually download the SEOmoz report as a CSV in order to see the Referrer column at the right side of the report.
Hope all that helps, more than confuses? If anything's not clear, be sure to holler.
Paul
-
RE: Lots of Pages Dropped Out of Google's Index?
I'm betting all those subcategories were indexed (unless you had specifically set them not to be, to manage dupe content). If indexed then yea, removing/redirecting them is removing pages from your site, so obviously number indexed would show a reduction. Same if you removed tags.
When you made category changes, e.g. changed /updates/ to /news/, did you make sure that WordPress created a proper redirect for the old URL? If not, you'll get all those posts 404ing, which will look like fewer pages indexed as well.
When you say you removed low quality content - that could be a lot of pages right there.
Lastly, have you confirmed that your xml sitemap updated fully and correctly? If not, the crawlers could be looking for pages that no longer exist. (For example, pages that have been 301-redirected should no longer appear in the xml sitemap.)
As you can see - lots of possibilities, but those are some starting points.
Paul
-
RE: Google Indexing Of Pages As HTTPS vs HTTP
That's not going to solve your problem, vikasnwu. Your immediate issue is that you have URLs in the index that are HTTPS and will cause searchers who click on them not to reach your site due to the security error warnings. The only way to fix that quickly is to get the SSL certificate and the redirect to HTTP in place.
You've sent the search engines a number of very conflicting signals. Waiting while they try to work out what URLs they're supposed to use and then waiting while they reindex them is likely to cause significant traffic issues and ongoing ranking harm before the SEs figure it out for themselves. The whole point of what I recommended is it doesn't depend on the SEs figuring anything out - you will have provided directives that force them to do what you need.
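For the redirect piece, on an Apache server it's typically a short blanket rule in .htaccess. A minimal sketch - example.com stands in for your domain, and this assumes the certificate is already installed so the redirect can be served without a browser warning:
RewriteEngine On
# Send any HTTPS request back to the HTTP version of the same URL
RewriteCond %{HTTPS} on
RewriteRule ^(.*)$ http://www.example.com/$1 [L,R=301]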
Paul
-
RE: Why is OSE showing no data for this URL?
Dana, there's no content on that page.
The massive head section with all its JavaScript is there, making it look like there's lots of code, but the actual body content has somehow been deleted.
This is all I see in the actual body of the page:
<form name="headerForm" action="IAFDispatcher" onsubmit="return submitQuery()" method="post">
That's it. There's no actual content, no footer, no closing body or html tag, which makes me think someone's actually deleted the content part of the code by accident.
Good luck figuring out who borked it
Paul
-
RE: Bad if Hosting Company Performs Domain Migration
The WP migration plugins I'm referring to do a rewrite of the URLs in the database. And yes, this is critical to a solid migration, instead of using redirects. There are a number of WP tools for this. My preferred tool is BackupBuddy (paid- 40% off this month) as it does an excellent job of the migration and is then a top-notch tool for managing the ongoing backing up of the site, as well as helping create a staging version of the site for future dev and maintenance purposes. I've also used the free Duplicator plugin for one-off migrations, and have used Updraft Plus on occasion as well.
The majority of the work is in tuning up the site after migration, and yes, making sure all the related functionality and tools have been updated as well.
My timeline would look something like this:
- Create addon domain in hosting cPanel for new domain and enable AutoSSL certificate - 15 mins
- Use migration plugin to move site to new domain - 1 to 1.5 hours depending on experience
- Run quality assurance testing to ensure all of the site and its functionality is running properly under the new domain and HTTPS, including updating the CDN and testing forms - 1-2 hours.
- Review and update 3rd party tools and off-site profiles - 2 hrs
- Implement final DNS changes and redirection of old domain to new, add change of address in Google Search Console - .5 hr
- Miscellaneous, including setting up backup protocol for new domain - 1 hr
- (And don't forget 3-4 hours of careful monitoring and followup for any errors over the following 4-6 weeks after migration, plus earning of new links to the new domain, and getting existing links replaced with new ones to the new domain where possible.)
For a total of about 6 or 7 hours for the migration work itself.
You're right, a clearly laid out and well-prioritised project plan for this kind of migration is absolutely essential. You need to know exactly what's going to be done, and in what order, so you can ensure all necessary steps are taken. To be blunt, many devs (even really good ones) don't take into account the extra details necessary in migrations like these that an experienced SEO pays attention to.
Having all the images on Amazon CDN actually simplifies the migration somewhat as those images will not have to be moved during the changeover, just have the CDN adjusted instead. The SSL should absolutely be installed on the new domain before migration - otherwise, you are going to add a lot of wasted time and complexity rewriting the database URLs a second time after the domain name change to update them to HTTPS.
Paul
-
RE: Why is OSE showing no data for this URL?
Glad I could help, Dana.
And yes, "borked" is a technical term. It's defined as existing in a badly broken state as a result of an inexperienced/inattentive user making unauthorised/incorrect changes to a website's code or content
Can also be used as a verb: "he borked the database so badly the whole site went 503".
Not that it's ever been applied to me or anything.
And yea - sometimes our tools can mislead us, even though the info they provided was "technically" correct.
Suggestion for a fast way to test the rest of the site for this kind of error: Use the paid version of Screaming Frog to program a search for a snippet of code that should be in the content area of every product page. Limit the crawl to the product pages category. (Or whatever sections of the site you're worried about.)
You could search for something as simple as class="productExtendedDescription" which would at least ensure the content container was there. Still wouldn't prove there was any content in it, but if you wanted to get fancy with regex, you could even do that too. You could also search for the closing body tag, which would indicate that the rest of the page's code likely exists.
Just an idea to speed up the testing process.
Paul
-
RE: .htaccess redirects
Ryan, it seems there must be a conflict in your htaccess file? If you're willing, you can PM me a copy of the full file and the site URL and I'll see if I can find anything.
Paul
-
RE: User Intent - Office Chairs & Content Writing
Yup, can do the same approach with SF. You can run it in List Mode which will let you upload the list of URLs to crawl, and you can set up an Extraction to separate out the h3s (Configuration > Custom > Extraction)
Paul
-
RE: SEOMOZ Support: Domain Name Change in SEOMOZ
This was covered in a recent question, DynoSaur. If you read the responses, you'll note that there's no way to effectively use the existing campaign once the domain has changed. Even with the 301 redirects, most of the campaign functions will no longer work.
You'll need to create a new campaign for the new domain. (You can copy-paste across your keywords etc.) Once the new campaign is live, you can archive (NOT delete) the original campaign so its historical data can still be referenced.
You'll want to get the new campaign set up as soon as possible so that you don't miss any data. (If you have already used up your campaign slots, one will be freed up when you archive the existing campaign.)
This certainly isn't a good way for the tool to handle this kind of situation. There is already a Feature Request in place asking for the ability to change domains in a campaign. It was last updated at the end of last year saying it was a planned feature but would be at least several months before it would be implemented. Would be worth adding your vote to the request to let them know this is an issue for you too.
Not the answer you were hoping for, I'm sure, but hope this helps?
Paul
-
RE: .htaccess redirects
The php declaration is to force your server to use php version 5.3, 3Plains. It's often put in place when a site's applications require a more recent version of php than is the default on the server.
@Aleya - his htaccess had the php declaration in the middle of some of his conditionals, which I suspect was the issue. Had him move the php declaration to the top of the file before turning rewrite engine on. Seems to have resolved the issue.
(Note, the php declaration can also be placed as the last line in the file. I just find it better at the top so I'm reminded it's there in case I have a php version issue after a future server upgrade.)
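For reference, on cPanel-style shared hosts that kind of declaration often looks something like the line below - the exact handler name varies from host to host, so check your host's documentation rather than copying this verbatim:
# Force PHP 5.3 for this site - placed at the top of .htaccess, before the rewrite rules
AddHandler application/x-httpd-php53 .php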
Pleased we got ya working
Paul
-
RE: Can some one help how to fix spam problem. please see the attached file
This is another automated tool that is pointing out to you that currently, your website is lacking in the areas that indicate a trustworthy, authoritative website.
The way to "fix this" is to begin improving your website using the tools and suggestions made by the folks in your other question.
Paul
-
RE: 302 redirects
The SEOMoz extension for Chrome also provides this info - at the bottom of its Page Attributes tab.
Paul
P.S. Also available in the Firefox toolbar, but it doesn't work for me.
-
RE: Image Height/Width attributes, how important are they and should a best practice site include this as std
Image h x w attributes don't affect the actual speed of your page load much, Dan. They do strongly affect the perceived speed to the user.
If the size attributes are included, the browser can leave a correctly-sized space for each image as the page gets rendered, even if the images haven't started to download yet. Then the rest of the page content flows in around the image "placeholders". (Images are always slower than text.)
If no image size attributes are present, the browser essentially ignores the placing of the images until the image files actually download, then redraws the whole page to add the space back in for the images.
This redrawing for the images means that text and other elements will move around on the page until all the images have downloaded and it has finished rendering. This gives the user an impression of a much slower page, since they can't start to read the content until it has stopped moving around. Done properly, the visitor can start reading the top of the page even while all the images lower on the page are still downloading.
So yes, obviously including height and width attributes for images is standard best practice for designing an effective on-page user experience.
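In practice that just means declaring the actual pixel dimensions on each img tag, along these lines (hypothetical file and sizes):
<img src="widget-photo.jpg" width="640" height="480" alt="Blue widget on a workbench">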
Hope that helps?
Paul
P.S. As proof, Google thinks they're such a standard requirement that they have included a check for them as part of the scoring algorithm of their Google Page Speed tool.
-
RE: Massive Amount of Pages Deindexed
First thing to confirm - did you recently migrate to HTTPS?
-
RE: Duplicate Site Content found in Moz; Have a URL Parameter set in Google Webmaster Tools
The Moz crawler has no access to what you might have set in Search Console, so it can't make use of that info, Mitchell. In addition, the other search engines will have the same problem.
Fortunately, there is a mechanism specifically built for this situation that works for pretty much all search crawlers. It's the canonical tag. By adding a self-referential canonical tag in the header of every page, you're telling search engines that any version of the URL that has a variable in it should be considered the same as the main (canonical) URL and pass all its influence to the canonical URL as well. Poof - dupe content issue resolved.
Self-referential just means that the page's canonical tag uses its own "clean" URL. That way, even if a search engine crawls the version with the variable, the page header will still point to the clean version.
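As a concrete sketch, a product page would carry a tag like this in its head section (the path is a placeholder - each page points at its own clean URL, ideally the https version, for the reason covered just below):
<link rel="canonical" href="https://www.kontrolfreek.com/example-product-page/" />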
Your site has an additional significant canonicalisation problem. It currently can be reached and indexed under both http://www.kontrolfreek.com and also the https version at https://www.kontrolfreek.com. Search engines consider these separate sites, so you're splitting your domain authority.
Get the 301 redirects in place so that all non-https pages and resources are redirected to their https versions, then use the https URL version for the canonical tags in each page header. (It's essential that static resources like images/CSS/JavaScript etc are also using the https URLs, otherwise browsers will flag security problems on the page, as is currently happening even on your https URLs.)
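On an Apache server the redirect piece is usually a short block in .htaccess, along these lines (a sketch only - adjust if your host handles SSL redirects differently):
RewriteEngine On
# Send all HTTP requests to the HTTPS version of the same URL
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://www.kontrolfreek.com/$1 [L,R=301]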
Hope that all makes sense? If not, holler.
Paul
-
RE: $360 charged to embed 2 youtube video clips on web page with CMS system - Realistic?
Yup - afraid you've been taken advantage of (or the developer somehow invoiced the wrong project?) Embedding YouTube videos is essentially just like adding photos. A few minutes copy-paste at most.
Paul
-
RE: 301 redirection help needed!
There's no reason to keep the old hosting just for the redirects. And that introduces a real weak point as at some future date, someone who doesn't realise it's still running only an htaccess file will delete what they think is unnecessary hosting.
Instead, use DNS of the old domain to point it to the new domain and then write the redirects in the new domain's htaccess.
Paul
P.S. And yes, you'll need to write redirects for each old URL that changes. If there are patterns of URLs, it's worth using regex to cut down the number of redirects written.
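As a rough sketch of what that looks like in the new domain's .htaccess (old-domain.com and new-domain.com are placeholders - swap in your real domains and URL patterns):
RewriteEngine On
# Pattern rule first: map a group of URLs whose paths changed straight to their new location
RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.com$ [NC]
RewriteRule ^old-blog/(.*)$ https://www.new-domain.com/blog/$1 [L,R=301]
# Then the catch-all: anything else arriving on the old domain goes to the same path on the new domain
RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.com$ [NC]
RewriteRule ^(.*)$ https://www.new-domain.com/$1 [L,R=301]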
-
RE: Ive been using moz for just a minute now , i used it to check my website and find quite a number of errors , unfortunately i use a wordpress website and even with the tips , is till dont know how to fix the issues.
You've got some work to do, @Dogara. It's essential to realise that just installing SEO plugins doesn't finish the job - they must also be carefully configured. And then the pages themselves must be optimised using the information the SEO plugin provides. Think of the plugin as a tool to make the optimisations easier, not one that will do all the work for you. Here's the task list I would tackle if I were you:
First things first - make certain you have a solid current backup of your website that you know how to recover if things should go sideways.
You currently have two competing SEO plugins active - definitely not recommended. You have both Squirrly and Yoast Premium. Since Squirrly doesn't appear to be configured at all, it should be removed. (This is assuming you haven't done any customisation work with Squirrly - as it appears to me from a quick scan through your pages, but I didn't do an exhaustive check, so if you've done customisations in this plugin, they may need to be exported, then imported into Yoast.)
Your Yoast Premium hasn't been updated in a full year - get it updated, both for security and functionality. (And get all themes and other plugins updated too if they're behind - this is the biggest thing you can do for your website's security. Did I mention you need to have a solid backup first?)
Fix your page layout templates - they are duplicating the page title and featured image.
Work through the Yoast settings and configure the defaults for each page type:
- Turn off the meta keywords functionality (no longer used).
- Decide what you wish to do with all your redundant archive types that are creating a huge amount of duplicate content and bloat. My recommendations:
  - Since your site only appears to have one author, disable author archives.
  - Turn off date-based archives. You're not using them anywhere that I can see, and few people are likely to search by date on the site.
  - No-index the tag archives. These are straight-up massive duplicate content on your site as they are just lists of posts that are also listed elsewhere, like your categories.
  - Add a couple of paragraphs of quality introductory text on each of your category pages (needs WordPress customisation depending on your theme - may be doable with a plugin). The alternative is to no-index your categories as well, but for a site like yours this probably isn't recommended, since those categories are used as your primary header navigation.
  - NOTE! These recommendations are based on assumptions about how visitors use the site. If you have business reasons for keeping some of these archives, the decisions may be different!
- Write solid custom meta descriptions for your categories (assuming you are going to keep them indexed). Currently, it is these category pages having no meta descriptions that is giving you that high error total in the Moz crawl. Do note that when you fix the meta descriptions, you may start seeing a large number of "duplicate meta description" errors listed in a new Moz crawl. This is because you have a large number of paginated pages for each category, and each will have the same meta description as the main page. This is not an issue, even though Moz may flag it, since the pages have proper pagination code in place already (rel-next and rel-prev in the headers). Note that Google has just this week changed the number of characters allowed in meta descriptions to be much longer - tools may not have caught up to this change yet.
- While you're editing the category meta descriptions, take the opportunity to write better SEO page titles for each of them as well. They're edited in the same place as the meta descriptions in Yoast, so easy to do at the same time.
- Get the template for your homepage adjusted to include proper rel-next and rel-prev meta tags in the header so that its pagination is handled properly (see the sketch after this list).
- Turn off JetPack's XML sitemap functionality and turn on the built-in sitemap tool in Yoast. You'll want to make certain only the appropriate sections of the site are in the sitemap (e.g. any post types/taxonomies you've no-indexed should be marked to exclude from the sitemap - tags, author archives, etc.). You'll also need to resubmit a new sitemap address to your Google Search Console - make sure it's set up for the HTTPS address and submit the sitemap address https://hipmack.co/sitemap_index.xml
- The "URLs too long" warning is somewhat arbitrary, but make certain you are rewriting the URLs of your new posts when you create them if they are too long (more than 4 or 5 words). I wouldn't bother going back to change the old ones at this stage.
- You are currently using an HTTPS Redirection plugin to manage the internal files of the site after your HTTPS migration. Would strongly recommend using a Search & Replace plugin for your database to properly rewrite these so you don't have a large number of internal redirects. Better for speed and more reliable.
- Moz will tell you which page titles are too long and you can go into Yoast for each related page/post and rewrite them. Note that Google will still index a "too long" title, it'll just lose the end of the title when displaying it on search results pages. (So, for example, if it's just the website name getting cut off at the end, it's not a big deal.) This is also a good time to optimise the meta descriptions for those posts, as that's done in the same spot where the titles are edited.
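For reference, the pagination and no-index pieces above boil down to a few tags in the page head - Yoast adds these for you once it's configured, and the URLs here are just placeholders:
<!-- On page 2 of a paginated archive -->
<link rel="prev" href="https://hipmack.co/category/example/" />
<link rel="next" href="https://hipmack.co/category/example/page/3/" />
<!-- On an archive you've chosen to no-index -->
<meta name="robots" content="noindex,follow" />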
Whew! And that's just the start, but if you get those things cleaned up, you'll be well on your way to cleaning up the technical SEO of your site.
Paul
-
RE: Is this nofollow tag written wrong?
Yup - agree with Jesse - no functional difference.
Stylistically I always write my anchor tags with the URL first, then any attributes afterward. Just much easier to read and check when reviewing code. But browsers don't care.
Paul
-
RE: Boost if loading website under 1 sec
Nope. The use of page speed as a (minor, and very specific) ranking factor doesn't work that way. Never has.
Page speed only applies to a small percentage of search queries and it's used as a disqualifier for really slow pages (think over 15 or 20 seconds). Meaning if your page is really slow compared to your competitors', their page will likely be preferred over yours. But small incremental improvements in page speed won't result in ranking boosts. (And Google's on record as saying that even slower pages will earn SERP results if their content is significantly better for the user's query.)
Improve page speed for the usability of your site, not because a small incremental speedup will change rankings.
Hope that helps?
Paul
-
RE: Co-occurence and synonyms
Synonyms are different words that mean the same thing - like sneakers, trainers, running shoes.
Co-occurring words/phrases are words/phrases that typically get used in connection with specific words and help search engines better understand and differentiate between words that are spelled the same but have different meanings (called homonyms).
For example: a page contains the word "apple". Is that a page about a delicious, healthy fruit, or about a massive technology corporation? The search engines will look at additional words that occur around the word in question (co-occurrence) - like pick, sauce, peel, vitamins or phone, operating system, laptop, shares - to better understand which of the two meanings of the word "apple" the page is likely to be about.
In my opinion, the best way to ensure both synonyms and co-occurring words are included in your content is to write thoroughly and naturally about the topic of the page. Natural writing seldom uses the exact same phrase over and over again - good writers are always looking for ways to vary the terms they use for the same thing to keep the writing interesting and engaging (hence synonyms). And thoroughly covering a topic means the co-occurring words will naturally be part of the content.
That's why it's so easy to detect content that has been artificially written to target a specific "keyword density", for example. An attempt to artificially use certain words for the search engines harms the enjoyment of the reader, which then harms the website. The furthest I usually go to ensure use of synonyms is to use basic keyword research to find the more popular synonyms for a term and keep them in mind to improve the quality/variety of the writing where appropriate.
At a basic level, you can use something like thesaurus.com to look up synonyms of your term, then put those terms into Google Keyword Planner to assess the volume and value of those synonyms and to find more related terms. Many keyword research tools offer this capability too. The challenge is that many of the synonyms may not register in the volume/competition measurements, which might mislead you into thinking they shouldn't be used.
Hope that all makes sense?
Paul
-
RE: 404 not found page appears as 200 success in Google Fetch. What to do to correct?
Unfortunately, it doesn't work that way, Alex. Unless the server returns an actual 404 response code in the header, search engines will not consider the page to be an error to be removed from their index. Even though the content of the page may look like an error page, the http response in the header is the only thing that determines whether the engines will treat it as a 404 error.
Paul
-
RE: Mobile indexing and tabs
Since this is essentially a duplicate of your earlier question, I've answered it there.
https://moz.com/community/q/text-hidden-in-tabs-on-desktop#reply_381047
-
RE: Site Crawl 4xx Errors?
You have a defective link coded for the FAQ link in your site's footer, Andrew. It's currently coded as a relative URL (the href is missing its leading slash or full domain), so it gets parsed as a relative link. This means the link gets resolved as www.example.com/currentpageurl/URL
Because that link is in the sitewide footer, it means every page is generating that link as a link to itself with the /URL added to the end. Which you're seeing detected as all those 404s.
This is very important to fix not only because the FAQ link is broken in the footer, but you've doubled the number of pages of the site for the search crawlers, meaning they're wasting time following thousands of useless links instead of focusing on real pages.
So fortunately it's a quick fix - find that link in your footer template and correct it.
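A hypothetical before/after to illustrate (your actual FAQ path may differ):
<!-- Path-relative: resolves against whatever page it appears on -->
<a href="faq">FAQ</a>
<!-- Root-relative: always resolves to www.example.com/faq -->
<a href="/faq">FAQ</a>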
Make sense?
Paul
-
RE: Google Webmaster tools: Sitemap.xml not processed everyday
Any chance there's been a custom crawl setting accidentally added to your Google Webmaster Tools, Robert? Some devs do this during development or it can happen accidentally.
Also, your sitemap takes a ridiculously long time to load - well over 15 seconds for me and over 18 seconds using webpagetest.org. It could be that Google simply isn't waiting for the page to load when it tries to visit. If the sitemap's being generated dynamically, you may have a rendering problem. Otherwise there's something borked when a 50kb file takes that long.
Might also want to try submitting it through Bing Webmaster Tools and see if they are better able to index it consistently for comparison?
Bit of a head-scratcher. Hope that gives you a starting point.
Paul
-
RE: Duplicate Homepage - How to fix?
This is an incorrect implementation in the BeamUsUp tool. The hostname (the basic root URL) is a special case. Both the version with the ending slash and without the ending slash are considered by browsers and search engines to be exactly the same.
In fact, you cannot redirect one to the other. Because the browser is programmed to consider them the same, you'll create an infinite loop. So not only is there nothing you should do, there's nothing you can do.
This is the only case where this is true though! For all other internal URLs, the version with the slash is considered to be a completely different URL than the one without the slash. So unless you redirect one version to the other for internal pages, you'll have duplicate content issues.
Hope that helps.
Paul
-
RE: Unnatural Link Notification - Third Go Round, specific questions
That's a really tough spot to be in, Valery - I can sense the frustration!
As far as your last question is concerned, Matt Cutts, in the blog post clarifying the new link warnings, specifically states that you should inform them if a network is charging to remove links:
In a few situations, we have heard about directories or blog networks that won't take links down. If a website tries to charge you to put links up and to take links down, feel free to let us know about that, either in your reconsideration request or by mentioning it on our webmaster forum or in a separate spam report. We have taken action on several such sites, because they often turn out to be doing link spamming themselves.
Have you tried asking for this same kind of help in the Google Webmaster Forum? Wouldn't hurt to double your possibilities - I've heard of some folks getting direct responses there.
It's possible that there was more than just dirty links going on that added to the penalty - assume you've looked for and cleaned up all the other Black Hat techniques that might have been in use?
Sorry, no experience with the link removal services so no help there. The folks at Link Research Tools have just recently released a nearly free Link Detox tool to look at link quality - might be worth running to see if it flags anything substantial you missed.
I'm sure your patience has just about run out, but I'd stick with it a little longer, especially if the site is otherwise high-value.
Good luck, and let us know how it goes...
Paul
-
RE: Switching host
I've confirmed with my senior source at SiteGround that you absolutely can upgrade directly to a Pro account with a dedicated IP all at the same time. Just order your IP address at the same time as you order your account upgrade.
If you have any further issue with this, just message me the contact you're dealing with at SiteGround and I'll have "my guy" help straighten it out.
Good luck!
Paul
-
RE: Website structure - best tools to analyse and plan, visually
Screaming Frog also has a basic, useful site visualisation capability built into it.
-
RE: Help!!! Website won't index after taking it over from another IT Company
The presence or absence of a new/old Analytics account associated with a site will have no effect on its indexing, one way or the other, Johann.
What do you mean when you say the old site was "blacklisted"? Do you mean it had a manual penalty applied? (As would be specified in Google Webmaster Tools). Or do you mean the domain had been marked as unsafe due to malware? Or that the domain's email servers were blacklisted?
To me, (without knowing the details of the "blacklisting") if the site is being crawled but not indexed, it sounds like there may be a noindex metarobots tag or header, or some other blocking issue. Hard to give much more info unless you can give us the site's URL.
Can you fill us in on what you mean by "blacklisted"?
Paul
-
RE: Should I switch from trailing slash to no trailing slash?
Honestly? I'd spend the time to get the custom CMS fixed to allow trailing slashes in the navigation links. That would eliminate the redirect issue, instead of just trading it off to another set of links that would have to redirect.
It sounds like a code sanitising issue in the CMS. Worth spending a couple of hundred dollars to fix the root cause of the issue instead of spending that money to apply bandaids that cause other problems elsewhere. (And bonus, maybe you can get proper canonicalisation built at the same time.)
Of course, yea, this does depend on having/finding a competent developer and having a test environment that doesn't endanger the live site.
Any chance you could push for this option?
Paul
-
RE: How to fix doorway site
My guess is that if the domain has no backlinks, it probably has no link juice to even pass to the main site.
What kind of traffic is the doorway site getting? Is it mostly direct from type-ins due to being an EMD?
Without seeing the site, my recommendation would be to take down the doorway site altogether and 301-redirect the domain to the appropriate related category on the main site. That would still pass along the traffic, and also some of whatever minor link juice the page had managed to acquire.
Would that make sense?
Paul
-
RE: 301 redirects don't work properly
I'm assuming you've been working on this, Frans? The redirect for http://www.bankstellenshop.com//bankstellen/u ends up on page http://www.bankstellenshop.com/bankstellen/goedkope/u.html. Is that the behaviour you want?
Note that the above is actually a chain of 301 redirects. It goes
http://www.bankstellenshop.com//bankstellen/u 301 redirects to
http://www.bankstellenshop.com/bankstellen/u.html which 301 redirects to
http://www.bankstellenshop.com/bankstellen/goedkope/u.html
It would be cleaner, more efficient and preserve rank influence better if the middle redirect was removed and the initial URL redirected to the final in one hop. (Some "rank juice" is lost with each redirect in a chain.)
Hope that helps?
Paul
-
RE: How to Track Keywords Locally?
As Michael says, local rank tracking is subject to all the personalisation problems of regular rank tracking. But if you're willing to work within those constraints, BrightLocal's set of local SEO tools will do exactly what you're looking for, plus some other really useful local functions. And really affordable, considering the features. (No affiliation, just find it a useful suite of tools.)
Hope that helps?
Paul
-
RE: Keyword Density Question
I think the question you're asking is more about keyword targeting than about keyword density.
If you think like your own target visitor, when she sits down to look for a dealer who sells the make of car she's interested in, does she type in "Chrysler Jeep Dodge Ram dealer"? Of course not. She's looking for a Chrysler, so she types in "Chrysler dealer". Or even more likely: "Chrysler dealer in MyTown". There are many keyword research tools (including the one here at SEOMoz) that will tell you the other variations of the words your target visitors are likely to use and how much competition there is for those terms.
The other thing to keep in mind is that you don't target keywords to a website - you target them to individual pages. Your "website" doesn't show up for the term Chrysler dealer; it's a particular page that gets listed.
So to answer your specific question, you have multiple, related keyphrases to target for. So you need to build multiple pages, each one (and a few supporting pages) targeted to one of the phrases you identified.
So while your home page might talk about being a Chrysler Jeep Dodge Ram dealer, you should have another really strong page or section that talks all about why you're the best Chrysler dealer in MyTown. And another page or section that specifically talks about how you're a Jeep dealer, etc. You are essentially building "mini-home-pages", or what are also known as Landing Pages, for each of your main terms.
A page has the best chance of ranking well if it's clearly focused on one or two keyphrases and their closely-related variations. Then you use the site architecture (how you set up the page hierarchy and links between pages) to help the search engines understand which are the most important pages and which are the supporting pages.
This is obviously just a short introduction to keyword targeting and research, but hopefully it gets you started?
Paul
{Edited to add: To answer your very specific question - Chrysler Jeep Dodge Ram dealer is one keyword. If you want to target Chrysler dealer, you MUST use that exact phrase, not just a long phrase that happens to include the target words somewhere within it. For much less competitive terms, sometimes just having them somewhere on the page (even though not together in a specific phrase) can be enough, but if you're targeting a phrase, you must use that exact phrase at least some of the time.}
-
RE: Help creating a 301 redirect in my htaccess file
Setting redirects to apply only to search crawlers is pretty much their definition of cloaking, Felip3. The site could end up in a world of hurt if/when caught.
What's the reason for wanting the crawlers to be redirected differently than users? Maybe there's a better way to accomplish what you need in a way that doesn't break their terms of service.
Paul
-
RE: Are there better & inexpensive third party website analytics software over Google Analytics?
There are so many analytics options out there that it's pretty difficult to make recommendations without knowing what it is you want to accomplish, Justin. Each will have their own very specific purposes and strengths. And weaknesses. For example most specific tools will do a very poor job of overall analytics. So they need to be run in conjunction with other tools.
What specifically do you want to know about your site that Google Analytics isn't telling you?
If you're looking specifically for heat-mapping, Crazy Egg is pretty much the industry standard. As a Pro member you can use your Pro Perks to get your first two months free.
But it's always better to have a clear plan for what and why you want to measure first. Then find a tool that delivers the data you need. Otherwise, all the new data will look very interesting but the time spent won't be justified by any improvement.
Hope that helps?
Paul
-
RE: Strange Pingback/Blog Comment Links
If that site is representative of the rest of the sites, then I'd say you might have a problem. This is going to involve a judgement call on your part, because as you say, Google has no hard and fast recommendations for these situations.
Looking at that site, the "Name" links are do-follow. Yours is one of 965 comments on the page. My guess is the site recently got itself added to someone's "do-follow comment sections to spam" list (yes there really are such lists) as all the crap links are from the last 4 months.
So to Google, this will absolutely look like the crap it is trying to hammer down with Penguin.
As for "not worrying about it until penalized" - just read through the miserable (and often futile) experiences all over the Q&A section here of site owners who have spent a year trying to dig out from under a Penguin penalty. And as for these links helping you at the moment - not bloody likely. Crappy internal pages with a PA of 1 and 965 links sure aren't passing you any ranking value.
So... how to decide what to do?
- How strong is the rest of your backlink profile and what percentage are these crappy backlinks?
- What's your link velocity for quality links compared these crap ones? (i.e. how frequently do you earn quality links compared to how often these new crap links are showing up?)
- how many of the crap links are actually do-follow?
If your own backlink profile is overwhelmingly strong, there may well be less to worry about, especially if the majority of the crap links are no-follow. The problem of course is that there are also plenty of reports of people running into issues after using the disavow tool too. So that's where the judgement call comes in.
If it were my site and I depended on it for income, the crap links were coming quickly and many were do-follow, I would collect the crap links as they come in, and once every 6 or 8 weeks, or when the links really start to mount up, I'd submit a disavow request. I'd also keep excellent records of the crap links and the quality links I'd been earning so I could respond quickly to a manual penalty should one come up.
Like I say - judgement call. Interested to know what others would do.
Paul
-
RE: Thoughts on how to speed up my site? (Other site ideas.)
First of all, just wanted to wish Noah a happy birthday!
You're right to be asking about speeding up your website, as there are a number of opportunities to improve page load times. I notice you also have questions posted asking about VPSs and CDNs. The thing is, there is definitely an order in which these things should be tackled.
The first thing you must do is get your on page code as efficient as possible. The reality is that an inefficiently coded website can bring even the strongest of servers to its knees with just a little extra surge in traffic. It's never a good idea to try to make up for inefficient code by throwing additional hardware resources at the problem.
By default, WordPress is fairly easy to use and build, but that front-end simplicity comes at the cost of complexity and inefficiency of code on the back end. Every bit of functionality you add using plug-ins and widgets comes at the cost of additional files to load, more database calls to process, and more opportunities for someone else's inefficient code to get in the way.
In your case, your homepage is approx 2 MB in size, and some of your internal pages are over 4.5 MB. This translates into page load times anywhere from 10 to 25 seconds at typical cable Internet speeds. In practical terms, anything over 1 MB is awfully big.
Site speed optimization can be a fairly complex process, and needs an overall plan, rather than trying to implement one-off tactics and hoping for the best. I'd be happy to help you off-line to build up a plan, but in the meantime these are the kinds of things you should be taking into account:
- reduce the number of plug-ins and widgets adding content to your page to the bare minimum.
- Make sure the plug-ins/widgets you have chosen are the most efficient available in their class
- make certain photos are as efficiently compressed as possible, are being rescaled as opposed to resized in html, and be careful of using too many photos on any one page
- Ensure your caching plug-in is tweaked and tuned to be as efficient as possible for your server configuration.
Some specifics for you to think about:
- the Recommended sidebar widget (and to a lesser extent the Popular widget) are extremely inefficient in their use of images. Even though they're only showing 50x50px thumbnails, they're loading full-size images up to 550kb in the background.
- is there enough difference between Recommended and Popular for both to be of benefit to visitors?
- you're obviously using W3 Total Cache - hope you don't also have SuperCache enabled as well? If so, they'll compromise each other's performance. Only one caching plugin - ever.
- you have several different advertising networks providing ads to the page. Each one loads its own set of backend scripts, many of which are VERY inefficient. Could you consolidate?
- you currently have over 1 megabyte of just JavaScript loading on each page, when the target should be less than 1 MB for the whole page including content
- sidebar titles like "Like Us on FaceBook" have so little contrast from the background they're very hard to read (& likely disappear on dark monitors)
- you get so many comments on each post (a nice problem to have!) that it may be time to consider paginating the comments so that each page doesn't have hundreds of comments to load each time it displays
Overall, you're doing an absolutely kick-ass job with your website, from everything I can see. You're rocking the engagement and interest of your visitors, and you've tuned most of the things that are readily tuneable. The next level up in performance will require some tough decisions and perhaps even killing a few sacred cows. That's why I recommend you make a plan before randomly changing things.
With the way you're going, you may very well find that even the highly tuned site is going to outgrow shared hosting as a result of increasing traffic. If/when this does happen, you'll still benefit from having leaned out the website so it can take better advantage of whatever server hardware it's hosted on.
I'll close with a couple of really useful tools for visualizing exactly what's happening in the performance of your site.
First is webpagetest.org (if you haven't already found it) which allows you to run tests that directly mimic the experience a user and their browser would have. You can even have it test from different areas of the country, and using different typical Internet speeds. I usually test with the two lowest speeds, as they will cover the widest range of typical users including mobile.
The second is a plug-in called P3 Performance Profiler. It may seem counterintuitive to be adding another plug-in when what you're trying to do is reduce complexity, but what P3 does is give you an actual readout of how much of your server's resources are being used by each of your site's plug-ins. You can enable it for testing purposes, then turn it off again for normal running of your website.
No doubt this has created as many questions as it may have answered, so by all means fire away if you need additional clarification.
Paul