Looks like that, or some approximation thereof, has you sorted. I would just like to add that you should keep an eye on Webmaster Tools.
Posts made by Travis_Bailey
-
RE: Blog subdomain not redirecting
-
RE: Blog subdomain not redirecting
I'm hesitant to say, "Do X," because I'm not really sure what will happen with the redirect plugin in the mix. I imagine a lot, if not all, of the subdomain's folders and pages have already been redirected via the plugin. So I imagine the path of least disaster at the moment is just redirecting the subdomain (sub.domain.com) to the main domain (www.domain.com) alone.
I could be totally wrong, but this one is weird.
Test out the rule and then push live. Here is the code to redirect just the subdomain to just the www domain:
RewriteCond %{HTTP_HOST} ^blog.domain.com$ [NC]
RewriteCond %{REQUEST_URI} ^/?$
RewriteRule .* http://www.domain.com [R=301,L]
Double check it, triple check it, and then push live. Keep a very close eye on it. I really hope we don't end up with a loop.
-
RE: Blog subdomain not redirecting
This particular situation won't sort itself out. There's a subdomain involved, and I suspect there's a rewrite rule that shouldn't be there. The developer appears to be somewhat sophisticated, as they're using X-FRAME-OPTIONS in a way that doesn't allow iframes to work outside of the domain.
So who knows what goodies await in .htaccess.
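For reference, that framing behavior is usually set with a single header - a minimal sketch, assuming Apache with mod_headers enabled (their actual directive may differ):
# Assumes mod_headers; only pages on the same origin may frame the site
Header always set X-Frame-Options "SAMEORIGIN"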
-
RE: Blog subdomain not redirecting
Okay, here's what I got:
The plugin supposedly operates independently of .htaccess. So taking that at face value, I don't think you're going to get what you need out of the plugin.
I would imagine the .htaccess file is much the same as it was when the site launched, or when it was last modified by the developer. So that file is likely going to need editing to achieve what you need. However, that file isn't something you just want to play with in a live environment.
And it's not something where anyone in their right mind would blindly say, "Yeah, just copy and paste this rule!"
I would talk to Dale and see if he has a block of free time coming up.
-
RE: Blog subdomain not redirecting
You mentioned in the above thread that you're using a redirection plugin. What is its name? Beyond that, Yoast and All in One both allow you to edit .htaccess entries. (I despise that feature, btw.)
-
RE: Blog subdomain not redirecting
I'm going to guess that you have something that looks like this in your .htaccess file:
RewriteRule ^blog/$ http://blog.website.com [L,NC,R=301]
WARNING
You can knock your site down with the slightest syntax error when you mess with the .htaccess file. Proceed with caution.
Let us know what you find.
-
RE: Strange keyword showing in GA
It was kind of humorous, at first. It's now showing as returning organic traffic. Direct link to screencap from the original post:
-
RE: Analytics Spammer Can Haz Humors: Who Else?
Thanks, but it's just some entrance keyword data. Kind of funny at that. Sure, the spam can increase. But I can set a segment as well as any other. I have extra profiles for filtering fun, if things get 'pucker worthy'.
-
Analytics Spammer Can Haz Humors: Who Else?
Yeah... the traffic is annoying. But seeing this (attached image) in my GA was pretty funny. Obviously it was only a couple hits. I thought it was pretty cute in comparison to the other actions.
-
RE: Google's Mobile Update: What We Know So Far (Updated 3/25)
Well, it looks like they went dark. No response to email or calls. I even developed a child theme for them.
Ah well, now to contact their competitors. I'm not doing that out of spite, rather I found an interesting situation. I would very much like to see how something like that changes things.
A few CSS tweaks, a banner redesign, and I can have my case again. Fortunately there aren't any contractual obligations involved with the first instance.
-
RE: Google's Mobile Update: What We Know So Far (Updated 3/25)
I'm in the middle of a freebie deal for a mom & pop, simply because none of their local competitors are 'mobile friendly'. They aren't in the most affluent area, and they rely on foot traffic, so it's safe to say that mobile results are critical. Hopefully I can get everything launched soon.
I'm not promising them the moon. I've made my motivations clear. It's a bit of my own curiosity mixed with the warm fuzzies I get from turning the 'little guy' into a beast among 'little guys'.
I'm certain there are measurable ways to be 'mobile friendlier than thou'. I just don't think it would be terribly ethical to knowingly hold back with a 'live subject'. Ah well, I'm sure there's something in there for IMEC.
-
RE: What the hell that iframe is doing?
I was thinking about the iframe passing juice. Apparently that was 'confirmed' a few years ago, but I never messed with it. It seemed kind of silly to rely on it, since it could be so easily detected. I don't know if the tactic has been 'disproved' and they're spinning their wheels.
I wonder what would happen if they fixed the non-www DNS failure?
Other than that, I think I see 'slow drip' link building.
But given that the home page seems to do better than any of the other pages I've seen, it may be safe to say the iframe tactic might work a little. The site's overall visibility appears to have steadily increased over the last four or five years.
Another possibility, and I wouldn't doubt they've tried it, is CTR bots.
I guess we should look into other domains that refer to |-| /\ppy flow eye tea.
-
RE: What the hell that iframe is doing?
I got distracted from this thread. I see the iframe pages. I have a hunch, but I'm not ready to render an opinion.
It's hilarious that they actually styled one of the tables as 'linkfarm'. SMH
-
RE: What the hell that iframe is doing?
I've learned one thing especially and that is: Don't try to learn Italian from a tire website. XD
I didn't find an instance of an iframe. There is a reference to iframe in the CSS, but no style is in place for an iframe. They do use a lot of jQuery, however.
Fun thing I learned today: noscript can be crawled and rendered. Just check the cache. The only thing that's actually cached in the corpo are the contents of the noscript tag. Weird, but apparently possible.
But if there's one thing I do know, at least at this moment, it's that a lot of vendita gomme (tire shops) aren't held to the highest standard. Also, this site's conversion rate will continue sucking eggs - as long as they require someone to create an account to purchase.
Otherwise the site just loads fast as hell, even in the US, and it's keyword stuffed to the Nth from the src up. In sum, I need to learn Italian and sell tires.
-
RE: What the hell that iframe is doing?
Would you be comfortable with PMing the competitor URL via Moz? I'm not interested in taking a client. I'm interested in what's happening. Moz is my witness.
Have you found evidence/considered the possibility that they're redirecting domains to the target domain? It's basically like running on quicksand, but it can be successful for a while. Just like any light switch tactic.
-
RE: Is it better to stick with a generic LocalBusiness Schema Itemtype for a particular type of business or should you get more specific?
Not a problem. Though one thing did kind of strike me as odd, and that's the white #fff maps link. Then again, I don't really know what color the background(s) will be. If the backgrounds are white, I would err on the side of caution and use a contrasting style. Just sayin'.
Besides, there's markup for map links which you can incorporate. More on that here. You can play around with the markup until you have something that satisfies your needs and validates, via the structured data testing tool.
A note on the testing tool:
It's so user friendly, it's not readily apparent that you can just click on the first window and paste your src.
Once you get comfortable with that, there are a lot of other ways you can use the markup.
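If it helps, the map link markup itself is just an itemprop on the anchor inside your LocalBusiness block - a minimal sketch with a placeholder URL (schema.org currently calls the property hasMap; you'll also see the older maps in the wild):
<a href="https://maps.example.com/your-listing" itemprop="hasMap">View our location on the map</a>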
-
RE: Is it better to stick with a generic LocalBusiness Schema Itemtype for a particular type of business or should you get more specific?
It's a good idea to be as specific as possible. A lot of people tend to over-think the whole deal. Every major search engine, of consequence in the Western hemisphere, has endorsed this form of structured data. If Schema provides a particular type, by all means use it.
The first snippet definitely checks out. Your competitors aren't likely using it. The search engines have admitted they're a little slow on the uptake, and you put in the time. You have some pretty good contact info/NAP boilerplate right there.
All the thumbs. First snippet.
-
RE: NAP: Best practices on your website?
Hi Nathan,
I second Richard's statement. It's a very good idea to build out individual landing pages with contact information. It's even better to include Schema markup, at least for the NAP. And lucky you! There's markup that relates directly to a furniture store as a local business.
To get started with the markup, you'll want to look at the code examples at the bottom of the local business page. That should give you a solid idea how you should structure the markup. Then you can see if your markup checks prior to pushing live via the Google Structured Data Testing Tool.
Here's another reference via Moz in regard to on-site local considerations.
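To give you an idea of the shape of it, here's a minimal microdata sketch using schema.org's FurnitureStore type - the business details are placeholders you would swap for your own NAP:
<div itemscope itemtype="http://schema.org/FurnitureStore">
  <span itemprop="name">Example Furniture Co.</span>
  <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
    <span itemprop="streetAddress">123 Main St</span>,
    <span itemprop="addressLocality">Anytown</span>,
    <span itemprop="addressRegion">TX</span>
    <span itemprop="postalCode">75001</span>
  </div>
  Phone: <span itemprop="telephone">555-555-0123</span>
</div>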
-
RE: Does alt tag optimization benefit search rankings (not image search) at all?
As Ryan touched on, it's always been a usability/accessibility concern. Since sites can get a little boost from other UI/UX concerns, it would stand to reason that alt text is still something to consider. As far as I know, it's never been the holy grail - but a nod to usability and/or accessibility can't hurt.
Alt text should be somewhat terse, though descriptive of the content. Otherwise, how can one justify the bandwidth alone?
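For example (a made-up product image), terse but descriptive looks something like:
<img src="walnut-dresser.jpg" alt="Six-drawer walnut dresser with brass pulls">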
-
RE: Traffic from organic grew significantly. But why?
Hi Evelien,
Does organic traffic appear to be attributable to any particular country or countries? This may sound strange, but I wonder if a competitor pulled out of the market. It appears just about every competitor you have in google.be got a pretty nice organic increase around that time, which has continued. Kruidvat seems to have the lion's share now.
Last I knew, Luxembourg was something of a tax shelter. With the recent changes in VAT, I wonder if a significant competitor or competitors found it difficult to continue operations. But that's just a 'shot in the dark' on my part.
-
RE: How valuable is a link with a DA 82 but a PA of 1?
I would generally dispense with the concern over metrics, considering the source. It sounds like a great citation source, regardless. Plus it may do what links were intended to do in the first place: Drive Traffic
OSE, aHrefs, Majestic and the like are just keyhole views into what's really going on. Albeit important keyhole views, but still limited insights into the big picture.
I would challenge that if one focuses less on granular metrics, and puts more attention into traffic and general relevancy; one would be happier with the results and have more time for generating similar results.
-
RE: Pages are Indexed but not Cached by Google. Why?
Good to hear you may be getting closer to the root of the problem. Apologies that it took so long to get back to you here. I had 'things'.
I followed the steps and you should be able to determine the outcome. Spoiler Alert: No block, this time.
It's a whole other can of worms, but should you need more human testing on the cheap; you may find Mechanical Turk attractive. One could probably get a couple hundred participants for under a couple hundred dollars, with a task comparable to the one above.
Just a thought...
-
RE: Does Disavowing Links Negate Anchor Text, or Just Negates Link Juice
In your position, I would want to know more about what I'm getting into as well. Before I have a contract, I would like to know what they've been doing over the last three years. There's a lot of time there in which previous actions could potentially help or hinder your efforts.
- Did they disavow?
- What did they (or a contractor) disavow, if anything?
- If they 'performed a disavow', where is the file? (There's a possibility it wasn't properly formatted, or it may not have been submitted. See the format sketch after this list.)
- Have they sent out link removal requests?
- If so, what were the results?
- Did they continue building low quality links after the fact? (History is a factor.)
- If so, for how long?
- Have they tried a reconsideration request after what you would deem a sufficient disavow/removal effort? (Though it may walk and quack like an algo/filter penalty, it could be manual.)
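For reference, a properly formatted disavow file is just a plain UTF-8 .txt file with one entry per line - domains prefixed with domain:, full URLs otherwise, and # for comments. The entries below are placeholders:
# Removal requests sent, no response received
domain:spammy-directory-example.com
http://low-quality-example.com/paid-links/page.html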
The above would be a few of my primary concerns before I started looking at anchor text ratios. If you've already covered those bases, good on you. Just let it be known, to everyone's general disinterest, that I said as much.
You may find that a lot of the heavy lifting is already done, but the execution was flawed at some critical point. That could free up resources for building a better internet and generally making your client giddy. Easy peasy, right?
I agree with Ryan's second paragraph. Definitely under-promise and attempt to over-deliver. I haven't seen many sites that didn't have at least a chance at recovery, if money were no object. However, there are sites where it would be wise to start over from an economic perspective. (Time/Opportunity Cost+Actual Money)
It's that nearly three year long penalty that would give me pause, prior to jumping in. Again with the ratios: if there's been a disavow and you don't have the file, you're not looking at anything remotely accurate - until you go through the same process. Still, no one ever has the entire picture. It's various shades of confidence in what you can gather about the situation.
There. I made it two paragraphs without emoting. I can go play video games now.
-
RE: Pages are Indexed but not Cached by Google. Why?
I can't really argue with log files, in most instances. Unfortunately, I didn't export crawl data. I used to irrationally hoard that stuff, until I woke up one day and realized one of my drives was crammed full of spreadsheets I will never use again.
There may be some 'crawlability' issues, beyond the aggressive blocking practices. Though I managed to crawl 400+ URIs before timeouts, after I throttled the crawl rate back the next day. Screaming Frog is very impressive, but Googlebot it ain't, even though it performs roughly the same function. Given enough RAM, it won't balk at magnitudes greater than the 400 or so URIs. (I've seen... things... ) And with default settings, Screaming Frog can easily handle tens of thousands of URIs before it hits its default RAM allocation limit.
It's more than likely worth your while to purchase an annual license at ~$150. That way, you get all the bells and whistles - though there is a stripped-down free version. There are other crawlers out there, but this one is the bee's knees. Plus you can run all kinds of theoretical crawl scenarios.
But moving along to the actual blocking, barring the crawler, I could foresee a number of legit use scenarios that would be comparable to my previous sessions. Planning night out > Pal sends link to site via whatever > Distracted by IM > Lose session in a sea of tabs > Search Google > Find Site > Phone call > Not Again... > Remember domain name > Blocked
Anyway, I just wanted to be sure that my IP isn't white listed, just unblocked. I could mess around all night trying to replicate it, without the crawling, just to find I 'could do no wrong'. XD
Otherwise it looks like this thread has become a contention of heuristics. I'm not trying to gang up on you here, but I would err on the side of plenty. Apt competition is difficult to overcome in obscurity. : )
-
RE: Pages are Indexed but not Cached by Google. Why?
I'll PM my public IP through Moz. I don't really have any issue with that. Oddly enough, I'm still blocked though.
I thought an okay, though slightly annoying, middle ground would be to give me a chance to prove that I'm not a bot. It seems cases like mine may be few and far between, but it happened.
It turns out that our lovely friends at The Googles just released a new version of reCAPTCHA. It's a one-click-prove-you're-not-a-bot-buddy-okay-i-will-friend-who-you-calling-friend-buddy bot check. (One click - and a user can prove they aren't a bot - without super annoying squiggle interpretation and entry.)
I don't speak fluent developer, but there are PHP code snippets hosted on this GitHub repo. From the documentation, it looks like you can fire the widget when you need to. So if it works like I think it could work, you can have a little breathing room to figure out the possible session problem.
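As a rough idea of what the server side looks like - a minimal PHP sketch, not the repo's exact code; the secret key is a placeholder from your own reCAPTCHA setup:
<?php
// Verify the user's reCAPTCHA response with Google before deciding to block.
$secret   = 'YOUR_SECRET_KEY'; // placeholder
$response = $_POST['g-recaptcha-response'];
$verify = file_get_contents(
    'https://www.google.com/recaptcha/api/siteverify?secret=' . urlencode($secret) .
    '&response=' . urlencode($response) .
    '&remoteip=' . urlencode($_SERVER['REMOTE_ADDR'])
);
$result = json_decode($verify);
if ($result && $result->success) {
    // Looks human - let the session continue instead of blocking the IP.
} else {
    // Failed the check - treat as a possible bot.
}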
I've also rethought the whole carpenter/mason career path. After much searches on the Yahoos, I think they may require me to go outside. That just isn't going to work.
-
RE: Pages are Indexed but not Cached by Google. Why?
Rest assured that I don't scrape/hammer so hard that it would knock your site down for a period. I often throttle it back to one thread and two URIs per second. If I forget to configure it, the default is five threads at two URIs per second. So yeah, maybe a bit of the Moz effect.
Chrome Incognito Settings:
Just the typical/vanilla/default incognito settings. It should accept cookies, but they generally wouldn't persist after the session ends.
I didn't receive a message regarding cookies prior to the block notification.
On a side note, I don't allow plugins/extensions while using incognito.
Fun w/ Screaming Frog:
It's hard to say if the instance 8.5 hours later was my instance of Screaming Frog. The IP address would probably tell you the traffic came out of San Antonio, if it was mine. I didn't record the IP at the time, but I remember that much about it. Otherwise it's back in the pool.
Normally Screaming Frog would display notifications, but in this instance the connection just timed out for requested URLs. It didn't appear to be a connectivity issue on my end, so... yeah...
Fun w/ Scraping and/or Spoofing:
Screaming Frog will normally crawl CSS and JS links in source code, so I found it a little odd that it didn't here.
I also ran the domain through the Google Page Speed tool for giggles, since it would be traffic from Googlebot. It failed to fetch the resources necessary to run the test. Cached versions of pages seemed to render fine, though, with the exception of broken images in some cases. I think that may have something to do with the lazy load script in indexinit.js, but I didn't do much more than read the code comments there.
In regard to the settings for the crawler, I had it set to allow cookies. The user agent was Googlebot, but it wouldn't have come from the typical IPs. Basically just trying to get around the user agent and cookie problem with an IP that hadn't been blocked. You know, quick - dirty - and likely stupid.
Fun w/ Meta Robots Directives:
A few of the pages that had noindex directives appeared to lack genuine content, in line with the purpose of the site. So I left that avenue alone and figured it was intentional. The noarchive directive should prevent a cache link. I was just wondering if one or more somehow made it into the mix, for added zest. Apparently not.
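For anyone following along, the directives in question look like this in the page head (the combination below is just illustrative):
<meta name="robots" content="noindex"> <!-- keep the page out of the index -->
<meta name="robots" content="noarchive"> <!-- allow indexing, but suppress the cached-copy link -->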
While I'm running off in an almost totally unrelated direction, I thought this was interesting. Apparently Bingbot can be cheeky at times.
Fun w/ The OP:
It looks like Ryan had your answer, and now you have an entirely new potential problem which is interesting. I think I'm just going to take up masonry and carpentry. Feel free to come along if you're interested.
-
RE: Pages are Indexed but not Cached by Google. Why?
No worries, I'm not frustrated at all.
I usually take my first couple passes at a site in Chrome Incognito. I had sent a request via Screaming Frog. I didn't spoof the user agent, or set it to allow cookies. So that may have been 'suspicious' enough from one IP in a short amount of time. You can easily find the Screaming Frog user agent in your logs.
Every once in a while I'll manage to be incorrect about something I should have known. The robots.txt file isn't necessarily improperly configured. It's just not how I would have handled it. Googlebot, at least, would ignore the directive since there isn't any path specified. A bad bot doesn't necessarily obey robots.txt directives, so I would only disallow all user agents from the few files and directories I don't want crawled by legit bots. I would then block any bad bots at the server level.
But for some reason I had it in my head that robots.txt worked something like a filter, where the scary wildcard and slash trump previous instructions. So, I was wrong about that - and now I finally deserve my ice cream. How I went this long without knowing otherwise is beyond me. At least a couple productive things came out of it... which is why I'm here.
So while I'm totally screwing up, I figured I would ask when the page was first published/submitted to search engines. So, when did that happen?
Since I'm a glutton for punishment, I also grabbed another IP and proceeded to spoof Googlebot. Even though my crawler managed to scrape meta data from 60+ pages before the IP was blocked, it never managed to crawl the CSS or JavaScript. That's a little odd to me.
I also noticed some noindex meta tags, which isn't terrible, but could a noarchive directive have made it into the head of one or more pages? Just thought about that after the fact. Anyway, I think it's time to go back to sleep.
-
RE: Pages are Indexed but not Cached by Google. Why?
For starters, the robots.txt file is blocking all search engine bots. Secondly, I was just taking a look at the live site and I received a message that stated something like; "This IP has been blocked for today due to activity similar to bots." I had only visited two or three pages and the cached home page.
Suffice to say, you need to remove the User-agent: * Disallow: / directive from robots.txt and find a better way to handle potentially malicious bots. Otherwise, you're going to have a bad time.
My guess is the robots.txt file was pushed from dev to production and no one edited it. As for the IP blocking script, I'm Paul and that's between y'all. But either fix it or remove it. You also don't want blank/useless robots.txt directives either. Only block the files and directories you need to block.
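As a rough idea of what "only what you need" looks like - the paths here are hypothetical; substitute whatever actually needs blocking on your server:
User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/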
Best of luck.
Here's your current robots.txt entries:
User-agent: googlebot
Disallow:
User-agent: bingbot
Disallow:
User-agent: rogerbot
Disallow:
User-agent: sitelock
Disallow:
User-agent: Yahoo!
Disallow:
User-agent: msnbot
Disallow:
User-agent: Facebook
Disallow:
User-agent: hubspot
Disallow:
User-agent: metatagrobot
Disallow:
User-agent: *
Disallow: /
-
RE: 700+ Genuine likes in 4 Days on a new Site - Does it risk Google Spam?
Google+ is a social medium. The largest recorded changes came from Google+ at that time. A few months later, Matt Cutts announced that Google may penalize Google+ for passing PR and gathering too much PR. So the most impressive parts of the KISS Metrics study were correct.
I hate to be a 'Cuttlet' or speak in terms of Page Rank, but there we have it. A social media platform could influence Page Rank at one time. It still may do so to a limited degree.
There's no doubt that a social post can have the effect of improving organic rankings, via links earned outside of typical social media.
People just have to care, to some degree - for some reason.
Knowing this is why I can afford the fine Chunky Soup for lunch. XD
-
RE: 700+ Genuine likes in 4 Days on a new Site - Does it risk Google Spam?
Let's get in an internet argument!!!
(That was a serious(ly fun) internet smile right there, bro.)
Social media can result in exposure that can result in lovely links outside of social media.
It's less of a stretch to say that organic links outside of social media, gained as a result of social media exposure, increase rankings.
The people at KISS Metrics weren't incorrect at all. Their measurements aren't doubted. I don't doubt them.
But there are some, pretty critical, things that remain unsaid there.
-
RE: 700+ Genuine likes in 4 Days on a new Site - Does it risk Google Spam?
Hi Ashish,
Our beloved Matt Cutts has, fairly recently, stated that social signals aren't part of the algorithm. Social signals aren't used for or against a site, regardless of their veracity. As unlikely as it sounds, Google just doesn't have the capability to gauge social signals in a timely and accurate manner - or so we are told.
It sounds like you made a solid tactical decision in regard to how one should gain exposure. Congratulations, you made something good. How you iterate upon your recent success is up to you, but it sounds like you'll do just fine.
-
RE: Multiple websites for different service areas/business functions?
Are we talking ACME Haberdashery and ACME Cobblers, or is this more of an ACME Plumbing and ACME Drain Cleaning situation? I gather that we're talking about local businesses, which comes with its own bit of fun. I'm just trying to gauge whether the difference in service offerings merits the effort.
-
RE: I have a very successful law firm that has lost its natural rankings. Looking for the best SEO consultant available to bring my rankings back. I really the need the best person out there and cost is not an issue.
Hi David,
This isn't really the forum for hiring consultants. As stated, you can go with the recommended list from Moz. I would much rather discuss some of the issues in the forum, out in the open. It's more fun that way.
Just from looking at a couple of the 19 or so domain names registered throughout the years, the situation looks like a bit of a hot mess. Best of luck.
-
RE: Pull meta descriptions from a website that isn't live anymore
I would do it one better and crawl from a local web server, just to be sure. But in all reality, a password protected directory is probably more accessible, in this instance.
-
RE: Changing Your Company's Name
I'll go through these in order.
#1-#4 Not unless you're advertising a different domain name via collateral or other channels.
In regard to #3 specifically, that should be an easy SQL find and replace (e.g. find smithjonespllc.com and replace with smithjonesdoepllc.com) if you have to change the name; there's a sketch at the end of this post. Then get ready for a ton of NAP cleanup.
#5 This may be an interesting opportunity to measure your offline efforts. Perhaps a clone of the site could be blocked via robots.txt and meta robots. Though you're better off spending most of your efforts online anyway. It's just a thought.
#6 I've worked with a number of firms, and partners come and go. So it's a question of permanence/vanity in many instances. Though it's entirely possible to make a page for the partner and run a local campaign for each partner. You'll more than likely get more mileage that way, and have fewer NAP cleanup issues.
My aunt is the President of her firm, though all of the properties only have the founding partner's names. The firm is over 100 years old and they're doing fine.
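Here's the find and replace sketch mentioned above, assuming a WordPress-style database - the table and column names are assumptions, so back up the database and adjust to your own schema first:
-- Swap the old domain for the new one in post content and post meta
UPDATE wp_posts SET post_content = REPLACE(post_content, 'smithjonespllc.com', 'smithjonesdoepllc.com');
UPDATE wp_postmeta SET meta_value = REPLACE(meta_value, 'smithjonespllc.com', 'smithjonesdoepllc.com');
-- Note: serialized option values need a proper search/replace tool rather than raw SQL.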
-
RE: No Index Meta
Odds are the errant meta robots tag is in header.php. You should be able to find the file in the back end under Appearance/Editor. With that said, be very careful - make a backup if you must 'cowboy code' - then work with a text editor (Sublime, Komodo, Notepad++, Notepad, etc.) to copy and paste your work.
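The sort of thing you're hunting for in header.php looks something like this (the exact content value may vary):
<meta name="robots" content="noindex,nofollow">
Also worth checking: Settings > Reading > 'Discourage search engines from indexing this site', which makes WordPress output a similar tag on every page.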
-
RE: Newly Acquired Website--Questions on Changing Permalink Structure
There are many times where I would take something similar to Scott's tack. I agree as well; he mentioned some good general practices. But this site has been in the wild for a bit and you just acquired it.
This means there's a good bit of homework to do before overhauling the entire permalink structure, for what I suspect is a considerable number of pages/posts. Knowing nothing else, the safest opinion I can render is to keep the structure as is for existing pages/posts (current traffic/link/revenue considerations) - and consider a custom post type and custom permalink workaround ('Sperimenting! Woooo!) for fun and profit(?). Your developer, or the WP development community, may have a better solution.
Though I generally tend to disagree with doing things simply because the competition is doing the same. If nothing else, the mentality tends to bleed over into everything. Just think of all the wasted time/opportunities. Some differentiation can be a good thing.
But if the custom permalink URLs tend to out-perform the slightly-lesser-than-pretty URLs over time, you can do a cost/benefit of total URL permalink change. (Rewrites... rewrites as far as the eye can see... buzzlightyear.jpg) But yeah, you'll probably have to confront the previous posts/pages permalinks at some time. I'm just saying confront the issue with some primary data, with the aid of secondary data.
-
RE: Newly Acquired Website--Questions on Changing Permalink Structure
I'll start with: Leave the existing URLs alone!!!
With the current permalink structure, you're possibly getting a slight page load speed boost. Apparently it's easier for WP to query and return a URL with numbers. I don't really understand how the speed boost happens, since everything has a numeric ID# anyway until the URLs become Pretty, so yeah.... there's that.
If we had our druthers, you and I, everything would be page-name post-name. It looks cleaner, and it's easier to read and remember. Though in sports, information is time sensitive. The date in the URL would help a slightly savvier user know, from the SERP, whether you're talking about a game won this season rather than last season.
This may be a little bit of a workaround, and I'm not a WP developer (I can muddle along and play until I get in trouble.) but you can make a custom post type with a custom permalink. That way, the old URLs stay the same and anything new can be done the way you see fit. I'm not sure about your level of comfort with the guts of WP, but here's something you can repurpose or show to a dev: Custom Post Type - Custom Permalink Tut
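To give you (or a dev) a starting point, here's a minimal sketch of a custom post type with its own permalink base - the post type and slug names are made up, so rename to taste:
<?php
// Register a 'recap' post type whose URLs live under /recaps/post-name/
// while existing posts keep their current permalink structure.
function myprefix_register_recap_post_type() {
    register_post_type( 'recap', array(
        'labels'      => array( 'name' => 'Recaps', 'singular_name' => 'Recap' ),
        'public'      => true,
        'has_archive' => true,
        'supports'    => array( 'title', 'editor', 'thumbnail' ),
        'rewrite'     => array( 'slug' => 'recaps', 'with_front' => false ),
    ) );
}
add_action( 'init', 'myprefix_register_recap_post_type' );
// Flush rewrite rules once afterward (Settings > Permalinks > Save) so the new URLs resolve.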
Best of luck.
-
RE: Webmaster Tools Verification Problem
And now you can get to what you were trying to do in the first place.
If I had a dollar...
-
RE: Webmaster Tools Verification Problem
That's odd that neither method works. There are still a few more methods you can try. You can find them here. If you could update us on how that works out for you, that would be appreciated.
Whenever I see things like this, I always wonder how often people get fired for system problems beyond their control. At any rate, once you re-verify your account, the historic data should still be there. So no worries on that part.
-
RE: Robots.txt
You may be better off just doing a pattern match if your CMS generates a lot of junk URLs. You could save yourself a lot of time and heartache with the following:
User-agent: *
Disallow: /*?
That will block everything with a ? in the string. So yeah, use with caution - as always.
If you're quite certain you want to block access to the image sizes subdirectory you may use:
User-agent: *
Disallow: /sizes*/
More on all of that fun from Google and SEO Book.
Robots.txt is almost as unforgiving as .htaccess, especially once you start pattern matching. Make sure to test everything thoroughly before you push to a live environment. For serious. You have been warned.
Google WMT and Bing WMT also provide parameter handling tools, which let you tell Bing and/or Google that you want their bots to ignore URLs with the parameter(s) you select. So if you wanted to handle it that way, it looks like ignoring the app= parameter should do the trick for most of your expressed concerns.
Good luck! explosions in the distance XD
-
RE: Google Local Business SEO
You don't have to make a new listing for each location, though I recommend making one for each. Even if they're in the same city, you can differentiate the listings with a descriptor. See here for more on that. It's pretty straightforward.
I'm not terribly familiar with search trends in Australia, but I'm fairly sure you would want the state in the title tag. State (abbreviation) in the title tag would definitely be better for a local business. So your title tag may look something like Blue Widgets Gold Coast Queensland | Business Name.
The above would likely be the home page title for a business that sells blue widgets in Gold Coast. Now say the business also does Blue Widget Repair as a service. Your service page title would look something like Blue Widget Repair Gold Coast Queensland | Business Name.
Now if you have another location, in another city, you would want a page with a title of Blue Widgets City State | Business Name. Going a little further, you could use this page as the link on your business listings. Trust me, it's good stuff.
-
RE: Making unresponsive site responsive, should I expect any ranking penalties?
Why not just adopt WordPress? ; ) I generally don't work with the .Net platform in the wild, unless I'm prospecting. You may or may not believe some of the crazy things that platform does out of the box. So make sure to get a full crawl of the site and sort out any issues prior to dev work. I recommend Screaming Frog in most instances. But if you have a gigantic site and/or you may not be able to identify problems readily, you may find Deep Crawl worth the price.
At any rate, just make sure you have a pre and post launch crawl for comparison. That alone can save you hours of time, should something go awry. If nothing else, the crawls will help a consultant, should you need one in the future.
Since you're making major site changes, it's also a good idea to get some site speed benchmarks. (Get benchmarks under various traffic loads, if possible.) It's possible that you can end up with a slower site, even though it's responsive. There are a number of ways that can happen, but at least you'll know if you have a speed problem.
I recommend GTmetrix and Pingdom for the above tasks. Here are some really simple fixes from Feed The Bot that should help with speed, once you're on an Apache server.
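For reference, the kind of .htaccess additions those guides cover look roughly like this - a minimal sketch assuming mod_deflate and mod_expires are available; tune the types and lifetimes to your site:
# Compress text-based responses
<IfModule mod_deflate.c>
AddOutputFilterByType DEFLATE text/html text/css application/javascript
</IfModule>
# Tell browsers to cache static assets
<IfModule mod_expires.c>
ExpiresActive On
ExpiresByType image/png "access plus 1 month"
ExpiresByType text/css "access plus 1 week"
</IfModule>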
Just remember, redesigns are a great time to catch any loose ends you 'didn't have time for' or were 'minor problems'. Those minor problems stack up to considerable wins, once they're righted. You're on the right path with a responsive redesign. Remember to preserve the source order of the site between versions (desktop, mobile, tablet) as much as possible. That will help with the speed as well.
-
RE: W3C Validation: How Important is This to Ranking
Seconding EGOL's statement, for the most part.
Years ago, The Matt Cutts stated W3C valid code wasn't a ranking factor. There's been a bit of debate over the years, but there still isn't much evidence to support W3C validation itself as a ranking factor. So it's something you probably can put on the back burner for more pressing concerns.
Honestly, sometimes errors are flagged simply because a comment or two are a little wonky. But that won't really inhibit how competitive a site is. If the site has 'quite a few' errors and warnings, that could potentially decrease site speed. Site speed is a ranking factor.
I suppose my best answer is; "No, it's not a ranking factor itself. Though there's some potential for poor coding to harm something that is a ranking factor."
-
RE: Pros or Cons of adding Schema Markup via HTML or through Webmaster Data Highlighter
My apologies, it's been a while on this thread. Just tested what, and it doesn't what? What site? What would be awesome?
-
RE: Awful ranking after site redesign
When a page is a 404, The Googles will come back to it in an undisclosed period of time. This is in order to make sure the page is really gone. Now if the pages that are gone used to receive referral traffic, it would be super handy to get those pages up soon, forget about the search engines. That way, you're recovering links and pages for the right reasons.
Your first question should be whether those pages were worth anything to begin with. I can rank a site for 'left handed profession city st' overnight. It doesn't mean any of that is going to work for the client.
But if they didn't redirect any of the old pages to their new, relevant, equivalents - I highly doubt they took the time to block those pages via robots.txt. If they did, wow. I'll leave it at that.
The increase of indexed pages could be due to any number of things. Perhaps a site search function is misconfigured? Perhaps the site uses tags in a way I wouldn't recommend? Perhaps the CMS, if there is one, is prone to duplicate content.
That's pretty much the best I can do without a specific example. Anyone with more 'skeelz' than I would be guessing as well. But thanks much for your question.
-
RE: Why I'm I ranking so low on Google Maps
You're welcome.
In regard to Schema, you'll probably be ahead of most contractors in the Montgomery area in adoption. It's been around for a few years, and all major search engines endorse its usage. It makes their job easier, so there are some perks.
You can go nuts with Schema markup. Fax, hours of business, logo, reviews and your second cousin's brother... well almost.
Though you will need to edit source code to implement the markup. You can get away with copying and pasting my first example (Though I think this editor trimmed off the word 'Map'.), once you get there with the Weebly WYSIWYG.
This is more of a 'nice to have' in regard to the site's blog; maybe add a little bit of text describing what's happening in the images. Sites get found in ways we never targeted. Mixing up the media a bit helps a lot.
-
RE: Why I'm I ranking so low on Google Maps
Dang it, the WYSIWYG stripped out the code. That feature is wonky... so... here goes....
Example: Filled Out
Guyette Roofing and Construction
1849 Upper Wetumpka Rd
Montgomery,
AL
36107
Phone: 334-279-8326
URL of Map
Example: Blankish
,
Phone:
<a href="" itemprop="maps">URL of Map</a>
-
RE: Why I'm I ranking so low on Google Maps
First, in regard to 'After' on http://www.guyetteroofing.com/blog/montgomery-roof-115. That weird little split looks a lot better than the crazy cobbled pseudo-valley they had going on. I've done some roofing in the past, as a homeowner and a starving student (Local job boards - between 15 credit hours - it helps if you can do construction). That job was a big improvement. I would imagine the ridge vent will add a bit of life to the job and make summers a little more bearable.
I've worked with quite a few commercial and residential contractors in the DFW area. There was a common theme that I noticed that I like to call 'Contractor's Syndrome'. Usually I would run into 'Name Roofing and Construction', 'Name Construction', 'Name Contracting', 'Name Contractors' and a few other variants. If the business had been around for more than a few years, the NAP cleanup was usually pretty involved.
I think this is the case here. There are a lot of citations for Guyettes Contracting LLC, including the BBB listing. All in all, I picked up Guyette's Roofing and Construction, Guyette Roofing and Guyette's Contracting, with the last being the most prominent. So it's safe to say there are actually a lot of NAP inconsistencies happening.
There are a lot of great local citations for Guyettes Contracting, so if I had to do it myself and run a business - I would probably err towards using that. The site seems to be doing okayish in organic for three months old. So just make sure that you're properly categorized in your local listings.
I noticed that you have another domain, which is owned by Hibu. If it's not doing anything for you, shut it down and ask them to transfer the domain to you. I've seen domain transfer requests go both ways with Hibu, but I wasn't handling the admin stuff at those times.
As an on-site consideration, I would recommend using Schema markup on at least your contact page. I noticed you're using Weebly, so I'm uncertain of your level of skill with site editing. I'll post a couple of snippets after this, one filled out with Guyette's Roofing - and one that's blank-ish. That way you'll have an example, should you go with a different name.
First Example: Filled Out
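(The editor stripped the markup here as well. As a rough reconstruction of what the filled-out snippet would have looked like, using LocalBusiness microdata - a more specific type could also work. Run it through the structured data testing tool before relying on it.)
<div itemscope itemtype="http://schema.org/LocalBusiness">
  <span itemprop="name">Guyette Roofing and Construction</span>
  <div itemprop="address" itemscope itemtype="http://schema.org/PostalAddress">
    <span itemprop="streetAddress">1849 Upper Wetumpka Rd</span>,
    <span itemprop="addressLocality">Montgomery</span>,
    <span itemprop="addressRegion">AL</span>
    <span itemprop="postalCode">36107</span>
  </div>
  Phone: <span itemprop="telephone">334-279-8326</span>
  <a href="" itemprop="maps">URL of Map</a>
</div>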
Just note that Schema markup isn't cruise control for local/organic rankings. It's just a nifty way to spoon feed search engines and possibly get some nice snippets. Hopefully that will help some.