This all began when I was asked to develop experiment parameters for our content protocol & strategy. Should be simple, right? I've spent days reviewing A/B testing tips from Moz and other sources, and I'm totally amped and ready to begin testing in Google Analytics.
Say we perform SEO for a restoration service brand with over 40 franchises, located all over the US.
- Every franchise has its own local website, e.g. restorationcompanylosangeles.com.
- Every franchise purchases territories in which it wants to rank. Some service over 100 cities.
- Most franchises also run PPC campaigns. As part of our strategy, we use location reach data from AdWords to focus on their high-reach locations first.
- We have 'power pages' that cover 5 high-reach branch-preference locations (areas the owners prefer to target) and 5 high-reach locations that are not branch preferences.
- We are working heavily on our national brand presence, partnering with PR and local news companies to build relationships for natural backlinks.
- We are developing a social media strategy for both national brand outlets and local outlets.
- We use major aggregators to distribute local citations for our branch offices, and we make sure NAP is consistent across all citations.
- We are Google partners, so we work with them to create Google listings (My Business & G+) for newly developing branches.
- We use LocalBusiness schema markup on all pages (a sketch of the markup follows this list).
- Our content protocol encompasses all the needed onsite optimization tactics: meta descriptions, titles, schema, keyword placement, semantic Q&A, internal linking strategies, etc.
- Our leads are calls and form submissions. We use several call tracking services to monitor calls, callers' locations, etc., and we are testing CallRail to start monitoring which landing pages and keywords generate our leads.
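For reference, here's a minimal sketch of the kind of LocalBusiness JSON-LD we inject per page. The Python below just emits the markup; all the branch values are hypothetical placeholders, and in practice they'd come from each branch's NAP record so the on-page markup matches the aggregator citations exactly.

```python
import json

# Hypothetical branch data; real values come from the franchise's NAP record.
branch = {
    "name": "Restoration Company Los Angeles",
    "url": "https://restorationcompanylosangeles.com/",
    "telephone": "+1-555-555-0100",
    "street": "123 Example Ave",
    "city": "Los Angeles",
    "state": "CA",
    "zip": "90001",
}

# schema.org LocalBusiness markup; keeping these fields identical to the
# citation data is what keeps NAP consistent everywhere.
jsonld = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": branch["name"],
    "url": branch["url"],
    "telephone": branch["telephone"],
    "address": {
        "@type": "PostalAddress",
        "streetAddress": branch["street"],
        "addressLocality": branch["city"],
        "addressRegion": branch["state"],
        "postalCode": branch["zip"],
    },
}

print('<script type="application/ld+json">')
print(json.dumps(jsonld, indent=2))
print("</script>")
```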
Parts that I want to change:
- Some of the local sites have over 100 pages targeting 'water damage + city', aka what Moz would call "doorway pages."
- These pages run 600-1000 words, all talking about the services we provide, though our four writers vary them enough that they aren't literal duplicates.
- Each page adds about 100 words about the city itself. That is the only unique variable.
- We pump out about 10 new local pages a month per site, so yes, over 300 local pages a month.
- Traffic to the local sites is very scarce.
- The content protocol/strategy is tested based on ranking only! We have a tool that monitors rankings on all domains, but it does not account for mobile, local, or personalized results (e.g., Google Now).
- My team is deeply attached to basing our metrics solely on ranking. The logic: if no local city page exists for a targeted location, we are less likely to rank there, and if you aren't seen, you get no traffic and no leads. Rankings for power locations are poor, while less competitive, low-reach locations rank OK.
- We update the content protocol by tweaking small things (multiple variants at a time), then check rankings every day for about a week to decide whether the experiment was a success.
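To put a number on why a week of daily checks worries me, here's a toy calculation in Python (all rank values made up): ordinary day-to-day rank volatility can easily be as large as the effect a small protocol tweak would produce, so a one-week eyeball test mostly measures noise.

```python
from statistics import mean, stdev

# Hypothetical week of daily ranks for one keyword on one page,
# as exported from a rank-monitoring tool.
daily_ranks = [14, 11, 17, 12, 19, 13, 16]

avg, noise = mean(daily_ranks), stdev(daily_ranks)
print(f"mean rank {avg:.1f}, day-to-day swing of +/-{noise:.1f} positions")
# If normal volatility is around +/-3 positions, a one-week check can only
# credibly detect a change much larger than 3 positions.
```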
What I need:
- An internal duplicate content analyzer, to prove that writing over 400 pages a month about water damage + city IS duplicate content (see the sketch after this list).
- Unique content for 'power pages'. I know from dozens of chats here in the community and from Moz blog posts that we can only truly create quality content for 5-10 pages, meaning we need to narrow down which locations matter most to us and beef those pages up.
- Blog content for non-'power' locations.
- A new experiment protocol based on metrics like traffic, impressions, bounce rate, landing page analysis, domain authority, etc.
- Digging deeper into call metrics and their sources.
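Here's the kind of internal duplicate analysis I have in mind: a minimal shingle-based Jaccard similarity sketch in Python, assuming the page bodies have already been extracted to plain text. The URLs and texts below are made up; two city pages that share everything but the ~100-word location blurb should score very high.

```python
from itertools import combinations

def shingles(text, k=5):
    """Lowercase word k-grams, the standard unit for near-duplicate detection."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Overlap of two shingle sets: 1.0 = identical, 0.0 = no shared k-grams."""
    return len(a & b) / len(a | b) if a | b else 0.0

def near_duplicates(pages, threshold=0.6):
    """pages: dict of url -> extracted body text. Returns pairs above threshold."""
    sets = {url: shingles(text) for url, text in pages.items()}
    pairs = []
    for u1, u2 in combinations(sets, 2):
        sim = jaccard(sets[u1], sets[u2])
        if sim >= threshold:
            pairs.append((u1, u2, round(sim, 2)))
    return pairs

# Hypothetical example: two city pages sharing everything but the city blurb.
pages = {
    "/water-damage-los-angeles": "We provide water damage restoration services. " * 40
        + "Los Angeles sits near the coast.",
    "/water-damage-pasadena": "We provide water damage restoration services. " * 40
        + "Pasadena is in the San Gabriel Valley.",
}
print(near_duplicates(pages))
```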
Now I am at a roadblock, because I cannot develop valid content experiment parameters based on ranking. I know that classic A/B testing requires two pages that are identical except for one variable, and we would have to either noindex one of them or canonicalize; neither option allows testing ranking for the same term.
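The closest thing I've found to a rank-based experiment is cohort-based split testing: instead of two versions of one page, you apply the protocol change to a random half of a set of similar city pages and compare the two groups' rank changes statistically. A minimal sketch using scipy's Mann-Whitney U test, with hypothetical rank data standing in for our rank monitor's export (the random values here only demonstrate the mechanics, not a real result):

```python
import random
from scipy.stats import mannwhitneyu

# Hypothetical rank data: page -> (rank before change, rank after change).
# In practice this would come from the rank-monitoring tool's export.
pages = {f"/water-damage-city-{i}": (random.randint(5, 50), random.randint(5, 50))
         for i in range(100)}

# Randomly assign whole pages to control vs. variant; the protocol change
# is applied only to the variant group's pages.
urls = sorted(pages)
random.shuffle(urls)
control, variant = urls[:50], urls[50:]

def rank_deltas(group):
    """Change in rank per page; negative = moved up in the results."""
    return [pages[u][1] - pages[u][0] for u in group]

stat, p = mannwhitneyu(rank_deltas(variant), rank_deltas(control),
                       alternative="two-sided")
print(f"U={stat:.0f}, p={p:.3f}  (p < 0.05 would suggest a real effect)")
```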
Questions:
- Are all these local pages duplicate content?
- Is there such a thing as content experiments based solely on ranking?
- Any other suggestions for this scenario?