Questions created by Alces
Analytics / Funnels: benefits of account creation at the end of steps?
I'm working with a client that has a number of paths for users to take (demand vs. supply, a lot of variables, etc.). All involve new URLs, making them perfect for destination-based goals and thus the creation of funnels; however, right now most of those funnels are unreliable due to issues involving redirects, changed paths, etc. In the process of trying to make sense of them, I discovered that the site forces you to create an account before proceeding through the steps in the funnel. Are there any resources that might indicate the best way to approach situations like this - that is, requiring account creation at the beginning (before the user can proceed) versus account creation at the end? I'm trying to make sense of all their potential paths, but it's impossible without making countless accounts. Thanks.
Reporting & Analytics | Alces
Best way to handle Breadcrumbs for Blog Posts in multiple categories?
The site in question uses WordPress. They have a Resources section broken into two main categories (A or B), each with 5 or 6 subcategories. The structure looks like this: /p/main-category-a/subcategory/blog-post-name and /p/main-category-b/subcategory/blog-post-name. All posts have a main category, but many posts have multiple subcategories, and some fall into both main categories. What would be the easiest or most effective way to auto-populate the breadcrumb based on how the person reached the blog post? For example, a way to set Home -> Main Category -> Subcategory 1 as the breadcrumb if they arrive from the Subcategory 1 landing page. Or is this not possible, and should we just set the breadcrumb manually based on where we feel the post best lives? Thanks.
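For reference, here's roughly the breadcrumb markup (JSON-LD, inside a <script type="application/ld+json"> tag) I'd want a post to emit when someone arrives via Subcategory 1 - all names and URLs below are made up:

```json
{
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://www.example.com/" },
    { "@type": "ListItem", "position": 2, "name": "Main Category A", "item": "https://www.example.com/p/main-category-a/" },
    { "@type": "ListItem", "position": 3, "name": "Subcategory 1", "item": "https://www.example.com/p/main-category-a/subcategory-1/" },
    { "@type": "ListItem", "position": 4, "name": "Blog Post Name" }
  ]
}
```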
Technical SEO | Alces
Does using a canonical with ?utm_source=gmb cause any issues?
All of our URLs in Google My Business are tagged with ?utm_source=gmb. That way, when people click through from a Google Maps listing, knowledge panel, etc., we know the visit came from there. I'm assuming that using a canonical on all ?utm_source pages (we have others, including some in the index) won't cause any problems with this, correct? Since they're not technically traditional organic listings? Dumb question, I know, but better safe than sorry. Thanks.
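To be concrete, the pattern I'm assuming is the standard one - the tagged URL carries a canonical pointing at the clean version (example.com stands in for our domain):

```html
<!-- Served at https://www.example.com/some-page/?utm_source=gmb -->
<link rel="canonical" href="https://www.example.com/some-page/" />
```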
Technical SEO | Alces
An immediate and long-term plan for expired Events?
Hello all, I've spent the past day scouring guides, walkthroughs, advice, and Q&As regarding this (including on here), and while I'm pretty confident in my approach, I wanted to crowdsource some advice in case I'm way off base. I'll start by saying that technical SEO is arguably my weakest area, so please bear with me. Anyhoozles, onto the question (and advance apologies for being vague):

PROBLEM
I'm working on a website that, in part, works with providers of a service to open their own programs/centers. Most programs run their own events, which leads to an influx of Event pages, almost all of which are indexed. At my last count, there were approximately 800 indexed Event pages. The problem? Almost all of them have expired, leading to a bit of index bloat.

THINGS TO CONSIDER
- A spot check revealed that each Event draws traffic for about a two-to-four-week window, then traffic disappears completely once the Event expires.
- About half of these indexed Event pages redirect to a new page: the indexed URL will be /events/name-of-event but will redirect to /state/city/events/name-of-event.

QUESTIONS I'M ASKING
- How do we address all these old Events that provide no real value to the user?
- What should the process look like going forward to prevent this from happening?

MY SOLUTION
Step 1: Add a noindex to each of the currently expired Event pages. Since some of these pages have link equity (one Event had 8 unique links pointing to it), I don't want to just 404 all of them, and redirecting them doesn't seem like a good idea since one of the goals is to reduce the number of indexed pages that provide no value to users.
Step 2: Remove all of the expired Event pages from the sitemap and resubmit. This is an ongoing process due to a variety of factors, so we'd wrap it into a complete sitemap overhaul for the client. We would also remove the Events from the website itself so no internal links point to them.
Step 3: Write a rule (well, have their developers write a rule) that automatically adds noindex to each Event page once it expires.
Step 4: Wait for Google to re-crawl the site and hopefully drop the expired Events from its index.

Thoughts? I feel like this is the simplest way to get things done quickly while preventing future expired Events from being indexed. All of this is part of a bigger project overhauling the way Events are linked to on the website (since we wouldn't be 404ing them, I'd simply suggest they be removed entirely from all navigation), but ultimately, automating the process once this is cleaned up is the direction I want to go. Thanks. Eager to hear all your thoughts.
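For Step 3, what I have in mind is just the standard robots meta tag, conditionally output once an Event's end date has passed (a sketch only - the actual template logic would be up to their developers):

```html
<!-- Rendered in the <head> of an Event page only after its end date has passed -->
<meta name="robots" content="noindex" />
```

Equivalently, the server could send an X-Robots-Tag: noindex HTTP header for those URLs.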
Technical SEO | Alces
Disallowing URL Parameters vs. Canonicalizing
Hi all, I have a client with a unique search setup. They have Region pages (/state/city), which we want indexed and which use self-referential canonicals. They also have a search function that emulates the look of the Region pages. When you search for, say, Los Angeles, the URL changes to /search/los+angeles and the page looks exactly like /ca/los-angeles. These search URLs can also carry parameters (/search/los+angeles?age=over-2&time[]=part-time), which we obviously don't want indexed. Right now my concern is how best to ensure the /search pages don't get indexed and we don't run into duplicate content problems. The options are these:

1. Keep the self-referential canonicals on the Region pages, and disallow everything after the second slash in /search/ (so the main search page stays crawlable).
2. Keep the self-referential canonicals on the Region pages, and write a rule that automatically canonicalizes all other search pages to /search.

Potential concern: /search/ URLs are created even with misspellings. Thanks!
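For option 1, the robots.txt rule I have in mind would look something like this (relying on the Allow directive and the $ end-of-URL operator, both of which Google supports):

```
User-agent: *
# Block everything beneath /search/ ...
Disallow: /search/
# ...but keep the main search page itself crawlable, if it lives at /search/
Allow: /search/$
```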
Technical SEO | Alces
How to deal with parameter URLs as primary internal links and not canonicals? Weird situation inside...
So I have a weird situation, and I was hoping someone could help. This is for an ecommerce site.

1. Parameters are used to tie Product Detail Pages (PDPs) to individual categories via a categoryid, which is reflected in the page's breadcrumbs. One product can thus be included in multiple categories.
2. All of these PDPs have a canonical that does not include the parameter/categoryid.
3. With very few exceptions, the canonical URLs for the PDPs are never linked to. Instead, the parameter URL is used, to tie the product to a specific category - primarily for the sake of breadcrumbs, it seems.

One of the big issues we've been having is the canonical URLs not being indexed for a lot of the products. In some instances the canonicals are indexed alongside parameter URLs; in others, just the parameter URLs are indexed. It's all very... mixed up, I suppose. My theory is that because the majority of canonical URLs aren't linked to anywhere on the site, Google is putting preference on the internal (parameter) links instead. My problem? **I have no idea what to recommend to the client (who will not change the parameter setup).** One of our technical SEOs recommended we "use cookies instead of parameters to assign breadcrumbs based on how the PDP is accessed." I have no experience with this, so... yeah. Any thoughts? Suggestions? Thanks in advance.
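To make sure I understand the cookie suggestion, here's a sketch of what I think was meant - set a cookie when the visitor clicks through from a category page, then read it on the PDP to pick the breadcrumb trail. This is purely hypothetical client-side code, not anything from the actual site:

```typescript
// On the category listing page: remember which category the click came from.
// (breadcrumb_category is a made-up cookie name.)
function rememberCategory(categoryId: string): void {
  document.cookie = `breadcrumb_category=${encodeURIComponent(categoryId)}; path=/; max-age=1800`;
}

// On the PDP: read the cookie, falling back to the product's primary category.
function getBreadcrumbCategory(defaultCategoryId: string): string {
  const match = document.cookie.match(/(?:^|; )breadcrumb_category=([^;]*)/);
  return match ? decodeURIComponent(match[1]) : defaultCategoryId;
}
```

If that's the idea, the PDP could always be linked at its clean canonical URL while the breadcrumb still reflects the path the visitor took.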
Intermediate & Advanced SEO | Alces
Ecommerce store on subdomain - danger of keyword cannibalization?
Hi all, Scenario: an ecommerce website selling a food product has its store on a subdomain (store.website.com). A good chunk of the URLs - primarily parameterized ones - are blocked in robots.txt. When I search for the products, the main domain ranks almost exclusively, while the store only ranks several pages deep in the SERPs. And only one variation of the product is listed on the main domain (ex: Original Flavor 1oz 24 count), while the store itself obviously has all of them (most of which are blocked by robots.txt). Can anyone shed a little insight into best practices here? The platform for the store is Shopify, if that helps. My suggestion at this point is to recommend they allow crawling in the subdomain's robots.txt and canonicalize the parameter pages. As for keywords, my main concern is cannibalization - or rather, forcing visitors to take extra steps to get to the store on the subdomain, because hardly any of the subdomain pages rank. In a perfect world, they'd have everything on their main domain and no silly subdomain. Thanks!
Intermediate & Advanced SEO | Alces
JSON-LD schema markup for a category landing page
I'm working on some schema for a client and have a question regarding the use of schema on a high-level category page. This page is merely the main lander for categories, for example: https://www.examples.com/pages/categories. All it does is list links to the three main categories (Men's, Women's, Kids') - it's a clothing store. The code I have right now simply uses the ItemList type with an array of ListItem elements. The Structured Data Testing Tool returns no errors with it, but my main question is this: is this the correct way to mark up a page like this, or are there better options? Thanks.
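Something along these lines (simplified, with placeholder paths on the client's domain), output in a <script type="application/ld+json"> block:

```json
{
  "@context": "https://schema.org",
  "@type": "ItemList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "name": "Men's", "url": "https://www.examples.com/pages/categories/mens" },
    { "@type": "ListItem", "position": 2, "name": "Women's", "url": "https://www.examples.com/pages/categories/womens" },
    { "@type": "ListItem", "position": 3, "name": "Kids'", "url": "https://www.examples.com/pages/categories/kids" }
  ]
}
```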
Intermediate & Advanced SEO | Alces
Combine poorly ranking pages into a single page?
I'm doing on-page optimization for an apartment management company, and they have about seven apartments listed on their site. Rather than include everything on the same page - /apartments/apartment-name/ - they have the following setup:

- /apartments/apartment-name/contact
- /apartments/apartment-name/features
- /apartments/apartment-name/availability
- /apartments/apartment-name/gallery
- /apartments/apartment-name/neighborhood

With very few exceptions, none of these pages appear to rank for anything, and those that do either rank very poorly for seemingly random keywords or rank only for terms like the apartment complex's name (alongside the complex's main landing page). I'm of a mind to recommend combining the pages into a single one that contains all the info, eliminates the chance of duplicate content (all of the neighborhood pages contain the same content verbatim), and prevents keyword cannibalization. Thoughts? Thanks.
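If we do consolidate, the cleanup step would presumably be a set of 301s from each subpage to its apartment's main page. Assuming an Apache host, a single .htaccess rule along these lines could cover all seven complexes (a sketch, not tested against their setup):

```apache
# 301 each subpage (contact, features, etc.) to the apartment's main landing page
RedirectMatch 301 ^/apartments/([^/]+)/(contact|features|availability|gallery|neighborhood)/?$ /apartments/$1/
```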
On-Page Optimization | Alces
Redirect indexed lightbox URLs?
Hello all, So I'm doing some technical SEO work on a client website and wanted to crowdsource some thoughts and suggestions. Without giving away the website name, here is the situation: the website has a dedicated /resources/ page. The bulk of the Resources are industry definitions, all encapsulated in colored boxes. When you click on a box, the definition opens in a lightbox with its own unique URL (ex: /resources/?resource=augmented-reality). The information for these lightbox definitions is pulled from a normal resource page (ex: /resources/augmented-reality/). Both of these URLs are indexed, leading to a lot of duplicate indexed content. How would you approach this?

**Things to Consider:**
- The website is built on WordPress with a custom theme.
- I have no idea how to even find the settings for the lightbox (will be asking the client today).
- Right now my thought is to simply disallow the lightbox URLs in robots.txt and hope Google stops crawling them and eventually drops them from the index.
- I've considered adding the main resource page's canonical to the lightbox URL, but the lightbox appears to be dynamically created, so there's no obvious place to add it (outside of the FTP, I imagine?).

I'm most rusty with stuff like this, so I figured I'd appeal to the masses for some assistance. Thanks! -Brad
Technical SEO | Alces
Over 500 thin URLs indexed from dynamically created pages (for lightboxes)
I have a client who has a resources section devoted primarily to definitions of terms in the industry. These definitions appear in colored boxes that, when clicked, open a lightbox with its own unique URL. Example URL: /resources/?resource=dlna. The information for these lightboxes is pulled from a standard page: /resources/dlna. Both are indexed, resulting in over 500 indexed pages that are either a bare lightbox or a full page with very minimal content. My question is this: should they be de-indexed? Another option I'm knocking around is working with the client to create Skyscraper pages, but that's obviously a massive undertaking given how many definitions they have. Would appreciate your thoughts. Thanks.
Technical SEO | Alces