
Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Technical SEO

Discuss site health, structure, and other technical SEO strategies.


  • Dear Moz community, we have an issue. We run a classified advertisement website, built like this: homepage (optimized for the main keyword, shows the latest listings from all categories) - category 1 (we did not want to use a variation of the keyword we want the homepage to rank for, as we thought this would "compete" with the homepage) - category 2 - category 3 - category 4. The listing URLs look like this: www.example.com/categoryname/listingname. Now the issue is that the homepage is not ranking at all for the main keywords. On other sites, when we used a URL structure like "example.com/main-keyword-listing-id", the homepage ranked. On the new site we followed best practice and used URLs as described above (/categoryname/listingid), and this caused our homepage not to rank at all for the main keywords. What did we do wrong? We want our homepage to rank for the main keyword and the categories for theirs. Should we: 1. Change the category 1 name to the main keyword (maybe a long-tail variation), so at least one of the main categories has the main keyword in its listing URLs? 2. Change the category listing URLs back to /main-keyword-listing-id? We thought that was a bit spammy, which is why we used categories; it would also mean all listings share the same URL name, which is not best for ranking the categories. 3. Just link back to the homepage internally with the main keyword and let Google catch that? (Currently in the menu you reach the homepage by clicking HOME, but we could use our main keyword there instead, e.g. "Latest car advertisements".) I would be happy for any feedback.

    | advertisingtech
    0

  • Hi, I wonder if content in a widget bar is less "SEO important" than main content. I mean, is it better to place content and links in the main content than in a WordPress widget bar? What are the pros and cons? Thanks!

    | Dreamrealemedia
    0

  • We are migrating all of our pages from HTTP to HTTPS, and I am listing a few of my concerns: Currently, all HTTPS traffic to our homepage and SEO pages is 301-redirected to the HTTP equivalent. So, when we enable HTTPS on all our pages, 301 all HTTP traffic to HTTPS, and stop the current 301 redirection to HTTP, will it still cause a loop during Google's crawl due to the old indexing? Should we move the whole SEO-facing site to HTTPS at once, or in phases? Which of the two approaches is better keeping SEO in mind? And what SEO changes will be required on all pages (e.g. canonical URLs on our website as well as affiliate websites, sitemaps, etc.)?

    | RobinJA
    1
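
A note on the redirect mechanics in the question above: a site-wide HTTP-to-HTTPS 301 is commonly done at the web server, and removing the old HTTPS-to-HTTP rule at the same moment is what prevents a loop; stale indexing alone does not cause one, Google simply re-crawls and updates. A minimal sketch for Apache (assumes mod_rewrite is enabled; hostnames are placeholders):

```apache
# .htaccess - send every HTTP request to its HTTPS equivalent with a 301
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]
```

Other servers (nginx, IIS) have equivalents; the key point is that only one direction of redirect may exist at any time.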

  • Hi everyone, I have a main page for my blepharoplasty surgical product that I want to rank. It's a pretty in-depth summary for patients to read all about the treatment, with before and after pictures, and there are calls to action in there. It works great and is getting lots of conversions. But I also have a "complete guide" PDF for patients who are really interested in discovering all the technicalities of their eye-lift procedure, including medical research, clinical details and risks. Now my main page is at position 4 and the complete guide is right below it at 5. So I tried to consolidate by adding the complete guide as a download on the main page. I've looked into rel=canonical but don't think it's appropriate here, as they are not technically "duplicates" because they serve different purposes. Then I thought of adding a meta noindex but was not sure whether this was the right thing to do either. My report doesn't get any clicks from the SERPs; people visit it from the main page. I saw in WordPress that there are options for the link: one says "link to media file", another "custom URL" and another "attachment". I've got the custom URL selected at the moment. There's also a box for "link rel", which I figure is where I'd put the noindex. If that's the right thing to do, what should go in that box? Thanks.

    | Smileworks_Liverpool
    0
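
Worth noting for the question above: a meta noindex tag can't be placed inside a PDF, and "noindex" is not a valid value for a link's rel attribute, so the usual route is an X-Robots-Tag HTTP header on the file itself. A hedged sketch for Apache (assumes mod_headers is enabled and the PDF is served from the site's own server):

```apache
# .htaccess - ask search engines not to index PDF files
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>
```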

  • UPDATED: We ran a crawl of the old website and have a list of CSS and JavaScript links that are part of the old website content. As the website has been redesigned from scratch, I don't think these old CSS and JavaScript files are being used for anything on the new site. I've read elsewhere online that you should redirect "all" content files when launching/migrating to a new site. We are debating whether this is needed for CSS and JavaScript files. Examples: (A) http://website.com/wp-content/themes/style.css (B) http://website.com/wp-includes/js/wp-embed.min.js?ver=4.8.1

    | CREW-MARKETING
    0

  • A client's developer moved a site onto a new (WordPress) CMS, where the only change was the URLs - the front-end code stayed the same. The site is 10+ years old and previously had fantastic rankings (#1-4) with inner pages for some relatively generic search phrases (e.g. 10,000 searches/month in the UK, per Keyword Planner). Now, on desktop searches the site isn't appearing anywhere in the 300+ results for a key search phrase where it used to rank between #2-4; however, over the last 3 weeks on mobile the site ranks better than before, even though the site isn't at all mobile-friendly (it's over 10 years old). During the move, there were some errors by their developer: a sitewide rel=canonical tag referring to the homepage was mistakenly left in; there were 3-4 chained 301s before finally reaching the new URLs; a lot of 301s were missed (250+ crawl errors appeared in Search Console); and page content was differentiated by parameter instead of by individual URLs. For the page that used to rank for the targeted phrase, this left 4 different URLs indexed with the same content. To tackle this, we have so far: put in correct rel=canonical tags; set up Search Console to recognise the URL parameter as differentiating content; fixed all crawl errors appearing in Search Console; added a link to the problem page direct from the homepage; stopped duplicate content being indexed (including for the page in question); and ensured the page load speed is still good (< 0.75s). Ranking better on desktop than mobile would make sense, but not mobile over desktop! I'd really appreciate any advice on how to tackle this. Thanks!

    | magicdust
    0

  • Hello everyone, I am currently listing my company on business directories. For some websites, however, when I add my website URL, it comes up as "URL is invalid". What could be the reason for this? I have tried different variations like www., http:// and https://. Kind regards,
    Aqib

    | SMCCoachHire
    0

  • Hi, wondering if anyone can help: my site has been flagged for duplicate content on almost every page. I think this is because the person who set up the site created a lot of template pages which use the same code but have slightly different features. How would I go about resolving this? Would I need to recode every template page they created?

    | Alix_SEO
    0

  • Hi, I got a domain with 49 DA and 36 PA, bought through Namecheap. After uploading it to their server with https://, within 5 days the site lost its PA and DA; it is now 1 PA and 9 DA. Could anyone tell me what the issue is behind this? The links are fair and not spam, so what caused the drop?

    | gamajunova
    0

  • Hi all, my site is currently ranking on page 1 for the term "golfholidays" but is ranking at the bottom of page 3 for the term I am targeting and have optimised for, which is "golf holidays". Does anyone have any experience with the combined keyword ranking above the two-word version? Nowhere on my page does it mention the term "golfholidays", and backlinks to my site mostly use the anchor "golf holidays". Thanks!

    | Andy9412
    0

  • Hello, we have an advertisements website. Usually in each country there is one "main" keyword which has the largest amount of searches. We have done it both ways in different markets: sometimes we optimize the homepage, www.example.com, for the keyword; sometimes we optimize the category page, www.example.com/thekeyword. I do not have solid data on which works better, as the keyword difficulties are too different to compare. The good thing with the category is that I would have internal links pointing to it and the keyword in the URL. Questions: 1. Which do you recommend?
    2. Are there any benefits in optimizing the category vs the homepage? Thanks!

    | advertisingtech
    0

  • Hi, we are trying to move all of our website content from www.mysite.com to a subdomain (e.g. content.mysite.com), and make www.mysite.com nothing more than an iframe displaying the content from content.mysite.com. We have about 10 pages linking from the home page, all indexed separately, so I understand we'll have to do this for every one of them (www.mysite.com/contact will be an iframe containing the content from content.mysite.com/contact, and so on for every page). How do we do this so Google continues to index the content hosted at content.mysite.com under the parent page in organic results (www.mysite.com)? We want all users to enter the site through www.mysite.com or www.mysite.com/xxxxxx, which will contain no content except for iframes pulling in content from content.mysite.com. Our fear is that Google will start directing users directly to content.mysite.com, rather than continuing to feed them to www.mysite.com. If we use www1.mysite.com or www2.mysite.com as the location of the content, instead of, say, content.mysite.com, would these subdomain names work better for passing credit for the iframed content to the parent page (www.mysite.com)? Thanks! SIDE NOTE: Before someone asks why we need to do this: the content on mysite.com ranks very well, but the site has a huge bounce rate due to a poorly designed CMS serving the content. The CMS does not load the page in pieces (like most pages load); instead it presents the visitor with a 100% blank page while the page loads in the background for about 5-10 seconds, and then boom, 100% of the page shows up. We've been back and forth with our CMS provider about doing something about this for 5 years now, and we have given up. We tested moving our AdWords links to xyz.mysite.com, where users are immediately shown a loading indicator, with our site (www.mysite.com) behind it in an iframe. The immediate result was resounding success:
    our bounce rate PLUMMETED, and the root domain www.mysite.com saw a huge boost in search results. The problem is our site still comes up in organic results as www.mysite.com, which does not have any kind of spinning-disk loading indicator, and still has a very high bounce rate.

    | vezaus
    0

  • The Moz crawler is not able to access our robots.txt due to a server error. Please advise on how to tackle the server error.

    | Shanidel
    0

  • In our blog posts, the schema.org Article author @type is 'Thing' instead of 'Person'. The name is correct. Do you think it would help develop my influencer profile to get the @type set to 'Person', or is it not relevant? I am looking at ways to develop a subject matter expert as an influencer; any suggestions appreciated.

    | Anzacare
    0
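
For reference on the question above, the author type is declared in the markup itself. A minimal JSON-LD sketch of an Article with a Person author, placed in a `<script type="application/ld+json">` block (names are placeholders):

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example post title",
  "author": {
    "@type": "Person",
    "name": "Jane Smith"
  }
}
```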

  • Hello, I'm working on our site and I'm running into an issue with duplicate content. Our company manufactures heavy-duty mobile lifts. We have two main lifts; they are the same, except for capacity. We want to keep the format similar, and the owner of the company wants each lift to have its own dedicated page. Obviously, since the layout is the same and the content is similar, I'm getting the duplicate content issue. We also have a section for our accessories and a section for our parts. Each of these sections has individual pages for each accessory/part. Again, the pages are laid out in a similar fashion to keep the cohesiveness, and the content is different, however similar - meaning different terminology, part numbers, stock numbers, etc., but the overall wording is similar. What can I do to combat these issues? I think our rankings are dropping due to the duplicate content.

    | slecinc
    0

  • Hello friends, you are probably aware that Google auto-generates the snippets for search results. Nowadays I am seeing some changes: Google is showing some specific words at the end of the search result title for every page of my website. It looks like Google is treating those words as the brand name. I have tried many things to solve this but, unfortunately, nothing works. Does anyone see the same changes? Can anybody help me out with this or suggest reasons behind it?

    | Shalusingh
    1

  • Hey all, we have a new in-house built tool for creating content. The problem is it automatically inserts a letter directly after the domain. The pages we build with this tool aren't all related, so we could end up with a bunch of URLs like this: domain.com/s/some-calculator
    domain.com/s/some-infographic
    domain.com/s/some-long-form-blog-post
    domain.com/s/some-product-page Could this cause any significant issues down the line?

    | joshuaboyd
    0

  • First off, am I correct in thinking that a "child" sitemap is a sitemap of a subfolder and everything that sits under it, i.e. www.example.com/example? If so, can someone give me a good recommendation for generating a free child sitemap please? Many thanks, Rhys

    | SwanseaMedicine
    0
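
On the terminology in the question above: the "parent" is usually a sitemap index file, and the "children" are the sitemaps it lists; a child sitemap is just an ordinary sitemap whose URLs happen to all live under one section. A minimal sitemap index sketch (URLs are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <sitemap>
    <loc>https://www.example.com/example/sitemap.xml</loc>
  </sitemap>
</sitemapindex>
```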

  • Hi guys, is there any difference between a URL with uppercase percent-encoding and one with lowercase percent-encoding? For example, I have 2 URLs for one page, like this: 1- 332-%D8%AA%D8%AD%D8%B5%DB%8C%D9%84-%D8%AF%D8%B1-%DA%A9%D8%A7%D9%86%D8%A7%D8%AF%D8%A7.html 2- 332-%d8%aa%d8%ad%d8%b5%db%8c%d9%84-%d8%af%d8%b1-%da%a9%d8%a7%d9%86%d8%a7%d8%af%d8%a7.html Both of them point to the same page, but no. 1 is the non-SSL version and no. 2 is the SSL version, and all pages of the site are forced to HTTPS.

    | seoiransite
    0
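
On the encoding question above: the two spellings differ only in the case of the percent-encoding hex digits, which RFC 3986 treats as equivalent octets (uppercase is the normalized form), so both decode to the same characters. A quick standard-library check (path shortened to a placeholder):

```python
from urllib.parse import unquote

# Two spellings of the same percent-encoded path segment; only the
# hex-digit case differs (%D8 vs %d8). Both decode to the same text.
url_upper = "332-%D8%AA%D8%AD%D8%B5%DB%8C%D9%84.html"
url_lower = "332-%d8%aa%d8%ad%d8%b5%db%8c%d9%84.html"

print(unquote(url_upper) == unquote(url_lower))  # prints True
```

Whether a given crawler normalizes the case before comparing URLs is a separate question; serving one canonical spelling (and one protocol) avoids having to rely on it.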

  • I'm currently in the process of revamping a website and creating a sitemap for it so that all pages get indexed by search engines. The site is divided into two websites that share the same root domain. The marketing site is on example.com and the application is on go.example.com. To get to go.example.com from example.com, you need to go through one of three “action pages”. The action pages are accessed from every page on example.com where we have a CTA button on the site (that’s pretty much every page). These action pages do not link back to any other page on the site though, nor are they a necessary step to navigate to other webpages. These action pages are only viewed when a user is ready to be taken to the application site. My question is, how should these pages be set up in a vertical sitemap since these three pages have a horizontal structure? Any insight would be much appreciated!

    | RallyUp
    0

  • Just thought I'd throw this question out to the Moz community. Anyone have leads for good, credible directories where I could get a smaller, local hospital listed on? (besides the obvious ones like HealthGrades, etc.,) Thanks!

    | TaylorRHawkins
    1

  • Hi there, I'm in a weird situation and I am wondering if you can help me. Here we go: we had some of our developers implement structured data markup on our site, and they obviously did not know what they were doing. They messed up our results in the SERP big time and we wound up getting manually penalized for it. We removed those markups and got rid of that penalty (phew); however, we are still stuck with two issues. We changed the URLs of some pages, so the old URLs are now dead pages getting redirected to the newer version of the same old page. However, two things have happened: a) for some reason, two of the old dead pages still come up in the Google SERP, even though it's over six weeks since we changed the URLs. We made sure that we aren't linking to the old version of the URL anywhere on our site. b) those two old URLs are showing up in the SERP with the old spammy markup. We have nowhere to remove the markup from, because those pages no longer exist, so obviously the markup code isn't anywhere anymore. We need a solution for getting the markup out of the SERP. We thought of one idea that might help: create new pages at those old URLs, make sure there is nothing spammy in them, and tell Google not to index these pages - hopefully that will get Google to de-index them. Is this a good idea? If yes, is there anything I should know about or watch out for? Or do you have a better idea for me? Thanks so much

    | Joseph-Green-SEO
    0

  • I was browsing Google Webmaster Tools and discovered there are 117,301 links to my site - mostly from very low-quality, spammy websites. I definitely did not solicit these links. I'm worried they are from a competitor trying to get me penalized by Google. Should I be worried about this?

    | steve_benjamins
    0

  • We have had some issues with one of our websites getting hacked. The first time it happened, we noticed it the next morning and cleaned it up before Google even realised. However, the same thing happened again over the weekend, and I came into the office to an email from Google: "Google has detected that your site has been hacked by a third party who created malicious content on some of your pages. This critical issue utilizes your site's reputation to show potential visitors unexpected or harmful content on your site or in search results. It also lowers the quality of results for Google Search users. Therefore, we have applied a manual action to your site that will warn users of hacked content when your site appears in search results. To remove this warning, clean up the hacked content, and file a reconsideration request. After we determine that your site no longer has hacked content, we will remove this manual action. Following are one or more example URLs where we found pages that have been compromised. Review them to gain a better sense of where this hacked content appears. The list is not exhaustive." We have again cleaned up the website; however, my problem is that even though we have received this email, I cannot find any evidence of the manual action having actually been applied. I.e. it doesn't show in the Search Console, and I am also not getting a warning in the search results when searching for our own website or clicking on the result for our website. That means I cannot submit a reconsideration request - however, based on my test searches, I am not at all sure a manual action was actually applied. Has anyone here experienced the same issue? What do you suggest doing in this case? Thank you very much in advance for any ideas.

    | ViviCa1
    0

  • Our official brand name has dots in it, and we're wondering if having those dots will hurt our organic ranking and/or lead to a misinterpreted crawl by the bots.

    | BoatUS
    0

  • Hey everybody, I'm looking for a bit of advice; bear with me, I'm a novice. A few weeks ago Google sent me an email saying all pages with any text input on them need to switch to HTTPS. This is no problem; I was slowly switching the site to HTTPS anyway using 301 redirects. However, my site also has a language subfolder in the URL: mysite.com/en/, mysite.com/ru/, etc. Due to poor work on my part, the translations of the site haven't been updated in a long time and lots of the pages are in English even on the Russian version, etc. So I'm thinking of just removing this URL structure and having only mysite.com. My plan is to 301 all requests to HTTPS and remove the language subfolder in the URL at the same time. So far the HTTPS switch hasn't changed my rankings. Am I more at risk of losing my rankings by doing this? Thanks!

    | Ruhol
    0

  • Hi SEO masters, Google is indexing these parameter URLs: 1- xyz.com/f1/f2/page?jewelry_styles=6165-4188-4184-4192-4180-6109-4191-6110&mode=li_23&p=2&filterable_stone_shapes=4114 2- xyz.com/f1/f2/page?jewelry_styles=6165-4188-4184-4192-4180-4169-4195&mode=li_23&p=2&filterable_stone_shapes=4115&filterable_metal_types=4163 I have set up Google's URL parameter handling like this: jewelry_styles = Narrows, Let Googlebot decide; mode = None, Representative URL; p = Paginates, Let Googlebot decide; filterable_stone_shapes = Narrows, Let Googlebot decide; filterable_metal_types = Narrows, Let Googlebot decide. The canonical for both pages is xyz.com/f1/f2/page?p=2. So can you suggest why Google has indexed all the related pages along with xyz.com/f1/f2/page?p=2? I have no issue with the first page, xyz.com/f1/f2/page (with any parameters); the canonical of the first page is working perfectly. Thanks
    Rajesh

    | Rajesh.Prajapati
    0

  • Hi all, I am wondering if anyone could help me decide how to handle a page I plan on removing but could possibly use later. A perfect example: let's say a company in Florida posted a page about the store's hours and possible closure due to an incoming hurricane. Once the hurricane passes and the store has reopened, should I 404 that page, given that another hurricane could come later? The URL is www.company.com/hurricane, so this is a URL we would want to use again. I guess we could just 410 and name each URL www.company.com/hurricane-irma and www.company.com/hurricane-jose for each new hurricane. I am just wondering what the best practice is for a situation like this. Thanks for the help!

    | aua
    0

  • I want to put canonical tags on the homepage of a site, but I can't figure out whether the canonical version of the homepage URL should have a trailing slash or not (www.example.com or www.example.com/). If I put the URL with the slash into the browser, I get the URL without the slash, and it isn't showing as a redirect in my Moz extension or other tools. But when I copy the URL from the browser and paste it elsewhere, it pastes with a slash. I have two questions: 1. In general, how does this work with homepage URLs? I see this happening with lots of sites. 2. Which URL should I set as the canonical version of my homepage? Thanks so much

    | Ruchy
    0
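
On the trailing-slash question above: for the bare hostname the two forms name the same resource (the root path is always "/"; browsers simply hide the slash in the address bar, which is why no redirect shows up), so either can be declared, as long as one form is used consistently everywhere. A minimal sketch:

```html
<!-- in the <head> of the homepage; pick one form and use it everywhere -->
<link rel="canonical" href="https://www.example.com/">
```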

  • This is a blog revamp where we are trying to personalize the experience for 2 separate audiences. The user starts on the blog that shows all stories (first screen) and can then filter to a more specific blog (the ESG or News blog). The filtered version for the ESG or News blog is done through a query string in the URL. We also swap out the page's H1s accordingly in this process; will this impact SEO negatively?

    | lina_digital
    0

  • I've read an audit where the writer recommended creating and uploading a blank robots.txt file; there was no existing file in place. Is there any merit in having a blank robots.txt file? What is the minimum you would include in a basic robots.txt file?

    | NicDale
    0
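
For reference on the robots.txt question above: a blank file and no file at all behave the same (nothing is disallowed), though an empty file does stop the server returning errors for the request. The conventional minimal file that explicitly allows everything, plus an optional sitemap hint, might look like this (sitemap URL is a placeholder):

```
User-agent: *
Disallow:

Sitemap: https://www.example.com/sitemap.xml
```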

  • One of our client's YouTube videos is showing a competitor's metadata on the Google search results page. It looks like it is pulling from videos in the right-hand rail of the YouTube page. Is there any way this can be controlled/changed? If so, how? The client is Deep South Crane. When you perform a search for "Deep South Heavy Hitters", the correct video appears in the search results; however, the meta description is pulling from a competitor's YouTube video. Any insight as to why this is happening and how I can change it would be greatly appreciated. Thank you.

    | JaredBroussard
    0

  • Hey all, bit of a novice here, so bear with me. We have a page with a lot of content in a tabular format that struggles to rank. I created a similar page, without the tabular format, which vastly outranks it, despite having a minuscule backlink profile in comparison. Now, I've always been under the impression that anything interactive on a website, like tabs, is the result of JS. However, I can't see any JS in the code (but as mentioned, I'm far from an expert). Code below. Description insert copy here, click on [insert anchor text](/media/MODULES AVAILABLE 2017-18.xlsx)
    more copy here. Anyone able to shed any light? Cheers, Rhys

    | SwanseaMedicine
    0

  • Hello! I have a number of "duplicate title errors", as my website has a long site title: "Planit NZ: New Zealand Tours, Bus Passes & Travel Planning". Am I better off with a short title that is simply my website/business name, "Planit NZ"? My thought was that adding some keywords might help with my rankings. Thanks, Matt

    | mkyhnn
    0

  • WordPress is doing this somehow, creating URLs for hundreds of pages that don't exist. How is this happening, and how do I stop it? I have many, many URLs like this: https://www.atouchofrust.com/terms-of-use/atouchofrust.com/vendor-news. Of note, atouchofrust.com/terms-of-use and atouchofrust.com/vendor-news are both legit pages on the site. Why they are being concatenated is beyond my limited understanding of WordPress. Please, somebody, help. Cori

    | FlyingC
    0
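
Concatenated URLs like the one above are typically produced by links that are missing a leading slash or scheme: browsers and crawlers resolve such links relative to the current page. An HTML illustration using the paths from the question (the exact theme or plugin emitting the bad href would still need to be found):

```html
<!-- on https://www.atouchofrust.com/terms-of-use/ -->
<a href="atouchofrust.com/vendor-news">…</a>
<!-- relative: resolves to /terms-of-use/atouchofrust.com/vendor-news -->

<a href="/vendor-news">…</a>
<!-- root-relative: resolves to /vendor-news -->

<a href="https://www.atouchofrust.com/vendor-news">…</a>
<!-- absolute: always unambiguous -->
```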

  • I'm a geologist and Forbes contributor (https://www.forbes.com/sites/trevornace/). I also am the founder of Science Trends (http://sciencetrends.com). I recently started Science Trends about a month ago and am wondering if there is benefit or harm in putting a link in my Forbes bio for Science Trends? The link would be on every article I write on Forbes (150+ as of now). I'd like to publicize Science Trends to my Forbes readers but I don't want to jeopardize a bad link profile on Science Trends. Any suggestions/tips?

    | tnace
    0

  • Hello experts. My ecommerce site is abcd.com, and its sitemap is abcd.com/sitemap.xml. My subdomain is xyz.abcd.com (this is a blank page, but it returns status 200 and is served from a CDN). The sitemap abcd.com/sitemap.xml contains only one link, to the subdomain sitemap xyz.abcd.com/sitemap.xml, and that sitemap contains all the category and product links of abcd.com. So my query is: is the above configuration okay? In Search Console I will add a new property, xyz.abcd.com, and add the sitemap xyz.abcd.com/sitemap.xml, so Google will be able to report errors for my website abcd.com. Purpose: I want to serve my XML sitemap from a CDN; that's why I created the subdomain xyz.abcd.com. Hope you understand my query. Thanks!

    | micey123
    0

  • Hi everyone, I know that variations of this question have been asked on this forum and have been answered by Google also. Google's response seems to be clear that "Websites that provide content for different regions and in different languages sometimes create content that is the same or similar but available on different URLs. This is generally not a problem as long as the content is for different users in different countries." This was our approach when launching a new .co.nz website recently to coincide with us opening a new office in Auckland. Our original site is still our .com.au site. We went with a new domain name over a subdirectory or subdomain for the reasons in the same Google article. After launching the NZ site in February and steadily growing some rankings, we've noticed in the last week or so a drastic drop in our keyword rankings (and traffic) for no apparent reason. There are no apparent issues in Search Console or with the Moz Site Crawl, so I'm wondering what's going on. I know rankings can fluctuate widely, especially when you're not on page 1 (which we're not), but the sudden and drastic drop did concern me. Currently, our AUS site's content is basically being replicated on the NZ site (e.g. blog posts, about us, company history, etc.). I just wanted to bounce it off you all to see whether you think it could be the "duplicate content" on the NZ site, or could it be something else? I'd really appreciate your input! Cheers, Nathan

    | reichey
    0

  • In Moz, my client's site is getting loads of error messages for nofollow tags on pages. This is down to the query strings on the e-commerce site, so the URLs can look like this: https://www.lovebombcushions.co.uk/?bskt=31d49bd1-c21a-4efa-a9d6-08322bf195af. Clearly I just want the URL before the ? to be crawled, but what can I do in the site to ensure that these nofollow errors are removed? Is there something I should do in the site to fix this? In the back of my mind I'm thinking rel=canonical tag, but I'm not sure. Can you help please?

    | Marketing_Optimist
    1

  • What's the correct method for tagging duplicate content between country-based subdomains? We have: mydomain.com (default, en-us), www.mydomain.com (en-us), uk.mydomain.com (UK, en-gb), au.mydomain.com (Australia, en-au), eu.mydomain.com (Europe, en-eu). In the header of each we currently have rel="alternate" tags, but we're still getting duplicate content warnings in Moz for the "www" subdomain. Question 1) Are we headed in the right direction with using alternate? Or would it be better to use canonical, since the languages are technically all English, just different regions? The content is pretty much the same minus currency and localization differences. Question 2) How can we solve the duplicate content between www and the base domain, since the above isn't working? Thanks so much

    | lvdh1
    1
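
For regional English variants like those above, the usual pattern is a self-referencing canonical on each subdomain plus a complete set of hreflang alternates: every version lists itself and every other version, with an x-default. A sketch using the subdomains from the question:

```html
<!-- on the www (en-us) homepage -->
<link rel="canonical" href="https://www.mydomain.com/">
<link rel="alternate" hreflang="en-us" href="https://www.mydomain.com/">
<link rel="alternate" hreflang="en-gb" href="https://uk.mydomain.com/">
<link rel="alternate" hreflang="en-au" href="https://au.mydomain.com/">
<link rel="alternate" hreflang="x-default" href="https://www.mydomain.com/">
```

Two caveats: "en-eu" is worth double-checking, since hreflang regions are ISO 3166-1 country codes and "EU" is not a country; and the www vs. bare-domain duplicate is normally solved by 301-redirecting one host to the other, not by alternate tags, since both are the same en-us version.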

  • Does content rank better in a full-view text layout rather than in a clickable accordion? I read somewhere that because users need to click into an accordion, it may not rank as well, as it may be considered hidden on the page - is this true? Accordion example (see Features): https://www.workday.com/en-us/applications/student.html

    | DigitalCRO
    1

  • We have search on our site, using the URL, so we might have: example.com/location-1/service-1, or example.com/location-2/service-2. Since we're a directory we want these pages to rank. Sometimes, there are no search results for a particular location/service combo, and when that happens we show an advanced search form that lets the user choose another location, or expand the search area, or otherwise help themselves. However, that search form still appears at the URL example.com/location/service - so there are several location/service combos on our website that show that particular form, leading to duplicate content issues. We may have search results to display on these pages in the future, so we want to keep them around, and would like Google to look at them and even index them if that happens, so what's the best option here? Should we rel="canonical" the page to the example.com/search (where the search form usually resides)? Should we serve the search form page with an HTTP 404 header? Something else? I look forward to the discussion.

    | 4RS_John
    1

  • Hi everyone, I'm running a classified real estate ads site, where people can publish the apartment or house they want to sell, so we use multiple filters to help people find what they want. Lately we added multiple filters to the URL to make searches more precise, things like: prices (priceAmount=###), bedrooms (bedroomsNumber=2), bathrooms (bathroomsNumber=3), total area (totalArea=1_50), services (elevator, common areas, security), among other filters, so you get the picture. All these filters are in the URL so that people can share their search on social media, and that creates two problems in the Moz crawl: overdynamic URLs and too-long URLs. Now, what would be a good solution for these two problems? Would a canonical to the original page before the "?" be OK? Example:
    http://urbania.pe/buscar/venta-de-propiedades?bathroomsNumber=2&services=gas&commonAreas=solarium The problem I have with this solution is that I also have a pagination parameter (page=2) and I'm using prev and next tags; if I use such a canonical, will it break the prev and next tags? http://urbania.pe/buscar/venta-de-propiedades?bathroomsNumber=2&services=gas&commonAreas=solarium&page=2 I'm also thinking that adding a noindex on pages with parameters could be an option. Thanks a lot; I'm trying to address these issues.

    | JoaoCJ
    0

  • Hi all, I have a blog (managed via WordPress) that seems to have built up spammy internal links that were not created by us. See "site:blog.execu-search.com" in Google search results. It seems to be a pharma hack that's creating spammy links on our blog to random offers re: viagra, paxil, xenical, etc. When viewing "Security Issues", GSC doesn't state that the site has been infected, and the site seems to be in good health according to Google. Can anyone provide insight on the steps to take to remove these links and to run a check on my blog to see if it is in fact infected? Should all spammy internal links be disavowed? Here are a couple of my findings: When looking at "internal links" in GSC, I see a few mentions of these spammy links. When running a site crawl in Moz, I don't see any mention of these spammy links. The spammy links lead to a 404 page; however, it appears some of the cached versions in Google are still displaying the page. Please let me know. Any insight would be much appreciated. Thanks all! Best,
    Sung

    | hdeg
    0

  • Hi there, We had developers put a lot of spammy markup into one of our websites. We tried many ways to deindex it by fixing the pages and requesting recrawls. However, some of the URLs that carried this spammy markup were incorrect URLs that redirected to the right version (e.g., the same URL with or without a trailing /). All the regular URLs are now updated and clean, but the redirected URLs can't be found in crawls, so they weren't updated and the spam couldn't be removed. They still show up in the SERP. I tried deindexing those spammed pages by marking them noindex in the robots.txt file. This seemed to work for about a week, and now they show up in the SERP again. Can you help us get rid of these spammy URLs?

    | Ruchy
    0
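One note on the robots.txt approach in the question above: Google does not support noindex directives inside robots.txt, which would explain why the pages reappeared after a week. A noindex has to reach Google in the page's HTML or in an HTTP response header. A hypothetical Apache snippet (mod_headers required; the filename is a placeholder) might look like:

```apacheconf
# Hypothetical example: serve a noindex header for one file.
<Files "old-spammy-page.html">
  Header set X-Robots-Tag "noindex"
</Files>
```

For URLs that only redirect and never serve a page of their own, the URL removal tool in Google Search Console is another option to consider.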

  • Hi everybody, I have the following question. At the company where I work, we deliver several services. We help people buy the right second-hand car (technical inspections), but we also have an import service. Because those services are so different, I want to split them on our website. So our main website is all about the technical inspections. Then, when you click on import, you go to www.example.com/import: a subpage with its own homepage and navigation, all about the import service. It's like having an extra website on the same domain. Does anyone have experience with this in terms of SEO? Thank you for your time! Kind regards, Robert

    | RobertvanHeerde
    0

  • On our backend system, when an image is uploaded it is saved to a repository. For example, if you upload a picture of a shark, it will go to oursite.com/uploads as shark.png. When you use that picture of the shark in a blog post, it will show the source as oursite.com/uploads/shark.png. This repository (/uploads) is currently being indexed. Is it a good idea to have our repository indexed? Will Google be unable to see the images if it can't crawl the repository link? (We're in the process of adding alt text to all of our images.) Thanks

    | SteveDBSEO
    0
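On the crawlability point above: Google can only surface an image in Google Images if Googlebot is allowed to fetch the image file itself, so blocking the repository in robots.txt would keep the images out of image search even with good alt text in place. A hypothetical robots.txt sketch (the /uploads/ path is taken from the question):

```
User-agent: *
# Leaving /uploads/ crawlable lets Google fetch and index the images.
# Uncommenting the next line would block image crawling entirely:
# Disallow: /uploads/
```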

  • Hi all, I manage a website for a software company. Many terms can be quite tricky, so it would be nice to add a glossary page. Beyond that, I have two questions: 1. What would be the SEO benefits? 2. How would you suggest implementing the glossary so we get as much SEO benefit as possible (for example, how would we link to it, and where would we place the glossary in terms of the sitemap)? Any advice appreciated! Katarina

    | Katarina-Borovska
    2

  • Hey everyone, I just started working on a website and there are A LOT of pages that should not be crawled - probably in the thousands. Are there any SEO risks of disallowing them all at once, or should I go through systematically and take a few dozen down at a time?

    | rachelmeyer
    1
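Whichever rollout pace you choose for the question above, it can help to test draft Disallow rules locally before publishing them, so you don't accidentally block pages that should stay crawlable. Python's standard-library robot-file parser can check a rule set against sample URLs; the rules and URLs below are hypothetical:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical draft rules: verify what they block before going live.
draft_rules = """\
User-agent: *
Disallow: /internal/
Disallow: /search
"""

rp = RobotFileParser()
rp.parse(draft_rules.splitlines())

# Pages meant to be blocked should come back False; public pages True.
print(rp.can_fetch("*", "https://example.com/internal/report-1"))  # False
print(rp.can_fetch("*", "https://example.com/products/widget"))    # True
```

Running a list of known-important URLs through a check like this before each batch goes live catches over-broad patterns early.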
