
Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Technical SEO

Discuss site health, structure, and other technical SEO strategies.


  • I have a pretty big directory site using WordPress with lots of "locations", "features", "listing-category", etc. Duplicate content: https://www.thecbd.co/location/california/ https://www.thecbd.co/location/canada/ (the referring URL is www.thecbd.co). Is it a matter of just putting a canonical URL on each location page, or just on the main page? Would this be the correct code to put on the main page? Thanks, everyone!

    | kay_nguyen
    0
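
    For reference, a self-referencing canonical goes in the `<head>` of each page; whether a location page canonicalizes to itself or to the main page depends on whether the location pages are true duplicates of it. A minimal sketch, using one of the URLs from the question:

```html
<!-- In the <head> of https://www.thecbd.co/location/california/ -->
<link rel="canonical" href="https://www.thecbd.co/location/california/" />
```

    If each location page has its own unique content, a self-referencing tag like this on every location page is the conventional setup.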

  • Hi, the situation is: I've cloned my homepage and added new content to the cloned page. I've then updated the WordPress settings to make the cloned page the new homepage. Will I lose PA, as the new cloned page is in effect a new article?

    | jasongmcmahon
    0

  • Invalid microdata: how much of an impact does invalid microdata have on SERPs? The lowdown: we are located in Australia and run our business on the Bigcommerce platform. The problem is that Google is crawling our Bigcommerce store in USD and displaying our microdata price in USD instead of AUD. How much of a problem is this in terms of SEO? We have seen a steady decline; many of our top-3 rankings have shifted down a few pegs to the mid-to-bottom of the top 10. We're also getting Google Shopping microdata warnings. Does anyone have a solution to resolve this microdata issue on the Bigcommerce platform (Stencil Cornerstone-based template)? Are there any other technical elements you notice at first glance on our website that may be a potential cause of the SERP decline from top 3 to top 10? URL: https://www.fishingtackleshop.com.au

    | oceanstorm
    0

  • I am a firm believer in the fundamentals of SEO, but is there any data to support its impact, positively or negatively, on a site's rank?

    | Brandonp
    0

  • Hey guys, I have a bizarre situation on my hands. I have a URL that is being wonky. The URL is redirecting to another URL, and the 301 redirect is not in my .htaccess. There is a 301 redirect in my .htaccess, but it is being overwritten by something else, i.e. whatever is happening above. So basically URL A should be redirecting to URL B, but instead it's going to URL C. I know we were not hacked; it's not redirecting to a strange, bizarre domain. I have also disabled all of our plugins that redirect (to my knowledge). Any thoughts would be great!

    | HashtagHustler
    0

  • I have about 25 301 redirects in my WordPress .htaccess file that look like this: <code>Redirect 301 /store/index.html https://www.notesinspanish.com/store-home/</code> At the moment they are at the bottom of my .htaccess file, below the usual WordPress rewrite rules: <code># BEGIN WordPress <IfModule mod_rewrite.c> RewriteEngine On RewriteBase / RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule . /index.php [L] </IfModule> # END WordPress</code> So they are below all that. Above my WP rewrite rules I have a number of other rules from plugins (caching, SSL). Are my 301s OK where they are, at the very bottom of that file? They are working and redirecting pages correctly. Should they be somewhere else? Many thanks for any help.

    | Benspain
    0
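
    For reference, a mod_alias `Redirect` directive needs spaces between the keyword, the status code, the source path, and the target URL. Its position in the file relative to the mod_rewrite block generally matters less than it would for RewriteRules, since mod_alias and mod_rewrite are separate modules. A sketch of the corrected syntax:

```apache
# mod_alias 301: note the spaces after "Redirect" and "301"
Redirect 301 /store/index.html https://www.notesinspanish.com/store-home/
```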

  • How do I write a 301 redirect in the .htaccess file so that http:// goes straight to https://www.?

    | VelocityWebsites
    0
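
    One common way to do this with mod_rewrite is a rule near the top of the .htaccess file that catches both plain-http and non-www requests. A sketch, assuming a placeholder domain of example.com:

```apache
RewriteEngine On
# Send anything that is not already https://www. to that canonical host
RewriteCond %{HTTPS} off [OR]
RewriteCond %{HTTP_HOST} !^www\. [NC]
RewriteRule ^(.*)$ https://www.example.com/$1 [R=301,L]
```

    Placing it above the standard WordPress block keeps the redirect from being short-circuited by the `[L]` flags in the WP rules.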

  • Hi all, we are experiencing issues with pages that have been 404'd still being indexed. Originally, these were /wp-content/ index pages that were included in Google's index. Once I realized this, I added a directive to our .htaccess to 404 all of these pages, as there were hundreds. I tried to let Google crawl and remove these pages naturally, but after a few months I used the URL removal tool to remove them manually. However, Google seems to be continually re-indexing these pages, even after they have been manually requested for removal in Search Console. Do you have suggestions? They all respond with 404s. Thanks

    | Tom3_15
    1

  • I have one client with two domains, with identical products appearing on both domains. How should I handle this?

    | Hazel_Key
    0

  • I have identified a large number of very high spam score links to "free wallpaper" coming into my site.
    I am running a WordPress blog and would like some advice on the best course of action. There are thousands of spam domains linking to various images on my site with the anchor text "get free high quality hd wallpaper". The webmasters for these domains are not contactable, so I am planning to submit a disavow file to Google. I am aware these links have negatively affected my DA, so I would like to do more to remove them. My questions are: will deleting the images they link to help?
    As this is a WordPress site, deleting the images will result in a soft 404; should I force a hard 404 to properly break the link?
    Will this positively improve my DA?

    | beckygh
    1
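
    For reference, a disavow file is a plain-text file uploaded through Google's disavow tool; `domain:` lines are usually preferable to individual URLs when dealing with a wholesale spam network. A sketch with made-up domains:

```text
# Spam "free wallpaper" network, webmasters unreachable
domain:spam-wallpaper-example1.com
domain:spam-wallpaper-example2.net
```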

  • Hello, We are working on a new site. The idea of the site is to have an ecommerce shop, but the homepage will be a content page, basically a blog page.
    My developer wants to have the blog (home) page on a subdomain, so blog.example.com, because it will be easier to make a nice content page this way, and the rest of the site will just be on the root domain (example.com). I'm just worried that this will be bad for our SEO efforts. I've always thought it was better to use a subfolder rather than a subdomain. If we get links to the content on the subdomain, will the link juice flow to the shop on the root domain? What are your thoughts?

    | pinder325
    0

  • Hi there, You might have experienced this before but for me this is the first. A client of mine moved from domain A (www.domainA.com) to domain B (www.domainB.com). 301 redirects are all in place for over a year. But the old domain is still showing in Google when you search for "site:domainA.com" The HTTP Header check shows this result for the URL https://www.domainA.com/company/cookie-policy.aspx HTTP/1.1 301 Moved Permanently => 
    Cache-Control => private
    Content-Length => 174
    Content-Type => text/html; charset=utf-8
    Location => https://www.domain_B_.com/legal/cookie-policy
    Server => Microsoft-IIS/10.0
    X-AspNetMvc-Version => 5.2
    X-AspNet-Version => 4.0.30319
    X-Powered-By => ASP.NET
    Date => Fri, 15 Mar 2019 12:01:33 GMT
    Connection => close Does the redirect look wrong? The change of address request was made in Google Search Console when the website was moved, over a year ago. Edit: I checked domainA.com on Bing and it seems that it's not indexed, and has been replaced with domainB.com, which is right. Just Google is indexing the old domain! Please let me know your thoughts on why this is happening. Best,

    | iQi
    0

  • Hello, Moz community! I need to access the old search console in order to submit a change of address. I used to be able to switch from a toggle on the main menu, but I can't seem to find that anymore. Does anyone have any ideas on how I can access it?

    | eddiewang25
    0

  • Hi there - I have a client that says they'll be "serving content by retrieving it from another URL using loadHTMLFile, performing some manipulations on it, and then pushing the result to the page using saveHTML()." Just wondering what the SEO implications of this will be. Will search engines be able to crawl the retrieved content? Is there a downside (I'm assuming we'll have some duplicate content issues)? Thanks for the help!!

    | NetStrategies
    1

  • Hello, for example if I search for "Bike Tours in France" I am looking for a page with a list of tours in France. Does it mean that if my HTML doesn't have list markup in the code, but only elements that apparently don't have any semantic meaning for a search engine, my page won't rank because of that? Example on this page: https://bit.ly/2C6hGUn According to W3Schools: "A semantic element clearly describes its meaning to both the browser and the developer. Examples of non-semantic elements: <div> and <span> - tell nothing about their content. Examples of semantic elements: <form>, <table>, and <article> - clearly define their content." Has anyone any experience with something similar? Thank you,

    | seoanalytics
    0

  • Due to the poor, unsightly look of breadcrumbs and the space they take up above the fold, we only employ breadcrumbs on our desktop version. Breadcrumbs are hidden from view on the mobile version. However, as mobile-first indexing is now in play, what technical SEO impacts will this have? One thing that comes to mind is crawling deeper pages, where breadcrumbs made them accessible in fewer than three link clicks. But I am unsure now of the impacts of not having breadcrumbs visible on the mobile version of our site.

    | oceanstorm
    0

  • According to Google Search Console, my pages are being crawled but not indexed. We use Shopify, and about two weeks ago I selected that traffic from all our domains redirects to our primary domain. So everything from www.url.com and https://url.com and so on would all redirect to one URL. I have added an attached image from Search Console.

    | HariOmHemp
    0

  • We have a number of historical domain names that we are thinking of 301 redirecting to industry relevant domains.
    Currently the domains we wish to redirect are not active and have been down since March 2018.
    As far as we know there is no bad reputation on these domains, but we think there are still links out there in the wild on possibly relevant blog posts. Would there be any negative effect on the target domain? Thanks

    | barry.oneil
    0

  • Dear all, my niche site was attacked by malware on 1 March 2018. A hacker injected a PHP file on my blog page, with injected links like mydomain.com/blog/dmy4xa.php?. I scanned my site with Wordfence, identified all the malware code, and manually cleaned the whole site, including the database. My site was completely free of malware; I removed all the malware links from Webmaster Tools and even blocked my blog page via robots.txt. But new malware links get indexed every week, so I have to remove them every week. Because of this I decided to rebuild my site on another server, then wiped my current server and migrated the site across on 10 January 2019. I waited a month for the malware links to be deindexed, but new links are still being indexed every week. I have even deleted the site from Google Webmaster Tools, with all properties, as well as the verification file from the server. Over a week later, the links are still showing (Google has indexed about 100 URLs). I am tired of deleting malware links every week and need a permanent solution. Please give me a solution for these indexed malware links.

    | Gfound123
    0

  • We have a store with thousands of active items and thousands of sold items. Each product is unique, so there is only one of each. All products are pinned and pushed online... and then they sell, and we have a product page for a sold item. All products are keyword-researched and often rank well for long-tail keywords. Would you: 1. Delete the page and let it 404 (we will get thousands)? 2. See if the page has a decent PA, incoming links and traffic, and if so redirect it to a RELEVANT category page (again, there will be thousands)? 3. Reuse the page for another product? For example, a sold ruby ring gets replaced with a new ruby ring, and we use that same page/URL for the new item. Gemma

    | acsilver
    0

  • At Magnet.me we track the items people are viewing in order to optimize our recommendations. As such we fire POST requests back to our backends every few seconds when enough user initiated actions have happened (think about scrolling for example). In order to eliminate bots from distorting statistics we ignore their values serverside. Based on some internal logging, we see that Googlebot is also performing these POST requests in its javascript crawling. In a 7 day period, that amounts to around 800k POST requests. As we are ignoring that data anyhow, and it is quite a number, we considered reducing this for bots. Though, we had several questions about this:
    1. Do these requests count towards crawl budgets?
    2. If they do, and we'd want to prevent this from happening: what would be the preferred option? Either preventing the request in the frontend code, or blocking the request using a robots.txt line? The concern with the in-app block is that it could lead to different behaviour for users and bots, and maybe Google could penalize that as cloaking; it is also slightly less convenient from a development perspective, as the logic is spread throughout the application. I'm aware one should not cloak, or make pages appear differently to search engine crawlers. However, these requests do not change anything in the page's behaviour; they purely send some anonymous data so we can improve future recommendations.

    | rogier_slag
    0
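
    If the POST endpoint lives on its own URL path, a robots.txt disallow is the conventional way to keep compliant crawlers away from it without changing what users see (the page itself renders identically for everyone, so this is not cloaking in the usual sense). A sketch, assuming a hypothetical /api/track path:

```text
User-agent: *
Disallow: /api/track
```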

  • I have 5 URLs that are "missing titles"; however, all 5 are landing pages that were created in Pardot. How would I go about adding the missing titles? Would I need to add them on our website platform or in Pardot?

    | cbriggs
    0

  • Hi there, I just made a crawl of the website of one of my clients with the crawl tool from Moz. I have 2,900 403 errors, and there are only 140 pages on the website. Here is an example of what the crawl errors give me: http://www.mysite.com/en/www.mysite.com/en/en/index.html#?lang=en | http://www.mysite.com/en/www.mysite.com/en/en/en/index.html#?lang=en | http://www.mysite.com/en/www.mysite.com/en/en/en/en/index.html#?lang=en ... and so on, with the /en/ segment repeated more and more times. There are 2,900 pages like this. I have tried visiting the pages and they work, but they are only HTML pages without CSS. Can you guys help me see what the problem is? We have experienced huge drops in traffic since September.

    | H.M.N.
    0

  • We have an old website with an old domain that has not been maintained for a few years; it now has a DA of 14 and a spam score of 24%. Our current domain (same business) within a few years has a domain authority of 21, page authority of 29, and a spam score of 1%. (Yes, this domain should have been redirected from the get-go.) The question is: what do you do with it now? Toss it, or redirect it? That domain has existed for years, but I'm still not sure what its value is from an SEO perspective. I would love to hear your feedback. Is there any benefit to redirecting the old domain to the current domain? Or is it a negative, and if so, what impact?

    | MyBambooSEO
    0

  • I have been getting conflicting advice on the best way to implement schema for the following scenario. There is a central e-commerce store that is registered to its own unique address, which is "head office". There are a few physical shops, each of which has its own location and address. Each shop has its own landing page within /our-stores/. So each page on the website has the Organization schema for the central organisation, something like the following. Then each physical store landing page has something like the following as well as the Organization schema. Is this correct? If it is, should I extend LocalBusiness with the store URL and sameAs for the GMB listing, and maybe the Companies House registration? It's also been suggested that we should use LocalBusiness for the head office of the company, then Department with the type Store, but I'm not sure about that.

    | MickEdwards
    0

  • I hope I am explaining this correctly; if I need to provide any clarity, please feel free to ask. We currently use a domain mask on an external platform that points back to our site. We are a non-profit, and the external site allows users to create peer-to-peer fundraisers that benefit our ministry. Currently we get many meta issues related to this site, as well as broken links when fundraisers expire, etc. We do not have a need to rank for the information from this site. Is there a way to handle these pages so that they are not part of the search engine crawls relating to our site?

    | SamaritansPurse
    0
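
    Where you control the server response for the masked pages, one option is to send a noindex as an HTTP header rather than a meta tag, which also covers non-HTML files. A sketch in .htaccess, assuming mod_headers is available and the block is scoped (e.g. inside a `<Directory>` or per-path config) to only the fundraiser pages:

```apache
<IfModule mod_headers.c>
  # Keep the masked fundraiser pages out of search indexes
  Header set X-Robots-Tag "noindex, nofollow"
</IfModule>
```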

  • What does "optimal use of keywords in the header tag" actually mean, given that you indicate this as a factor hurting SEO?

    | Serg155
    0

  • Multilingual links in the footer section are being counted as backlinks, and we are getting tons of backlinks from all 7 language versions of the website. Is there a solution where we eliminate these links while still having the option to navigate to the other language pages?

    | comfortclick
    0

  • Hi there, I'm working on a website where the webmaster redirected a bunch of parent pages but left the URLs of the children pages as is. For example: domain.com/business/ got 301 redirected to domain.com/personal. But they left behind all the children pages of /business/ as is, like: domain.com/business/trips domain.com/business/travel instead of changing these to: domain.com/personal/trips domain.com/personal/travel Is there an SEO implication to the children pages /business/trips and /business/travel having a redirecting parent? Will it affect robots crawling or link equity passing?

    | NikkiHernandez
    0

  • We are launching a new website and switching to WP 5.0 Gutenberg.  Are there any issues we should be aware of related to SEO with the new platform?

    | AegisLiving
    0

  • We are a media site, www.hope1032.com.au, that publishes daily content on the WordPress platform using the Yoast SEO plugin. We allow smaller media sites to republish some of our content with the canonical field set to our URL. We have discovered some of our content is now ranking below, or not visible on, some search engines when searching for the article heading. Any thoughts as to why? Have we got an SEO problem? An interesting point: the small amount of content we have republished is not ranking against the original author on search engines.

    | Hope-Media
    0

  • When I run a WordPress blog through the Structured Data Testing Tool, I see that there is an @type of hentry. Is this enough for blogs, etc.? Is this a result of WordPress adding in this markup? Do you recommend adding the BlogPosting @type, and if so, why? What benefit is there to adding a specific type of schema? How does it help in blogging? Thanks

    | AL123al
    4

  • Due to the restraints of category page layout many of the products in certain categories have the product titles truncated, in some cases missing off 2-5 words depending on the product in question.  The product name which displays on the category page is lifted straight from the product page itself, so not possible to do something like "product name including spec..."  to place ... to indicate a bit more. I'm assuming not but just wanted to check that Google will not frown on this.  Text is not being hidden it just does not render fully in the restricted space.  So there is a scenario of 'bits of' text in the source not displaying on the rendered page.

    | MickEdwards
    0

  • Hi, for the past few months I have been facing a problem with getting backlinks indexed. Please share a method to get backlinks indexed in Google quickly.

    | vijay23
    1

  • There's a domain name (we will call it A) with no domain authority that is currently forwarded to a domain with 36 DA (we will call this domain B). B has been dormant for about two years. I am getting both domains, but domain A works better for what I will be using it for. So basically, I want to swap things around so B forwards to A, instead of A forwarding to B. Any dangers here or things to consider that I may be overlooking?

    | CWBFriedman
    0

  • Hi there, in October, one of our customers' programmers made a change on their website to optimize its loading speed. Since then, all the SEO metrics have dropped. Apparently, the change was to move to Cloudflare and to add gzip compression. I was talking with the programmer and he told me he had no idea why that happened. Now, 5 months later, the SEO metrics haven't come back yet. What seems so weird is that two keywords in particular had the most massive drop. Those two keywords were the top keywords (more than 1k impressions a month), and now it's like there are no impressions or clicks at all. Has anyone had the same thing happen to them? Do you have any idea what could help in this case?

    | H.M.N.
    0

  • Hi, I am working on a large global site which has around 9 different language variations. We have set up the hreflang tags and referenced the corresponding content as follows. (We have not implemented an x-default reference, as we felt it was not necessary.) Using DeepCrawl and Search Console, we can see that these language variations are causing duplicate title issues. Many of them. My assumption was that hreflang would have alleviated this issue and informed Google what is going on; however, I wanted to see if anyone has any experience with this kind of thing. It would be good to understand the best-practice approach to deal with the problem. Is it even an issue at all, or are the tools just being over-sensitive? Thank you in advance.

    | NickG-123
    0

  • Hi, I've seen a fair amount of topics speaking about the difference between domain names ending with or without trailing slashes, the impact on crawlers and how it behaves with canonical links.
    However, it sticks to domain names only.
    What about subfolders and pages, then? How does it behave with those? Say I have a site structured like this:
    https://www.domain.com
    https://www.domain.com/page1 And for each of my pages, I have an automatic canonical link ending with a slash,
    e.g. rel="canonical" href="https://www.domain.com/page1/" /> for the above page. SEMrush flags this as a canonical error. But is it, really?
    Are all my canonical links wrong because of that slash? And as a subsidiary question, both domain.com/page1 and domain.com/page1/ are accessible. Is this a mistake, or does it make no difference? (I've read that those are considered different pages.) Thanks!
    G

    | GhillC
    0
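
    The usual fix is to pick one form, enforce it with a 301, and make the canonical tags match the enforced form. A sketch that forces the trailing-slash version in .htaccess (the domain is a placeholder):

```apache
RewriteEngine On
# Add a trailing slash to any request that lacks one and is not a real file
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_URI} !/$
RewriteRule ^(.*)$ https://www.domain.com/$1/ [R=301,L]
```

    With a redirect like this in place, /page1 and /page1/ no longer resolve as two separate accessible pages, and a canonical of https://www.domain.com/page1/ is consistent rather than an error.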

  • I have a client with a blog set up on their domain (example: blog.clientwebsite.com), and even though it loads at that subdomain, it's actually a WordPress-hosted blog. If I attempt to add a plugin like Yoast SEO, I get the attached error message. Their technical team says this is a brick wall for them, and they don't want to change how the blog is hosted. So my question is: on a subdomain blog like this, if I can't control what is in the sitemap with a plugin and can't manually add a sitemap because the content is being pulled from a WordPress-hosted install, what can I do to control what is in the index? I can't add an SEO plugin, I can't add a custom sitemap, and I can't add a robots.txt file. The blog is set up with domain mapping, so the content isn't actually there. What can I do to avoid tags, categories, author pages, archive pages and other useless content ending up in the search engines?

    | ShawnW
    0

  • At Magnet.me we are using Intercom to communicate with our users. This means that we are actively adding JavaScript code which loads the Intercom JavaScript on each page and renders the button afterwards. However, this button has no value for crawlers, and it slows the page down, as the JavaScript is big and fairly slow. Therefore I considered shipping some code which disables this button for crawlers, so that performance would improve. To give a ballpark estimate, the button's JavaScript is around 3x bigger than the entire React application... Unfortunately, this would result in giving users and crawlers slightly different content on the page. I'm unsure about the possible SEO impact: would Google mark the page as faster due to having fewer resources to load? Or would it penalize the page for showing slightly different content to users and search engines?

    | rogier_slag
    0

  • I have a client who has a resources section. This section is primarily devoted to definitions of terms in the industry. These definitions appear in colored boxes that, when you click on them, turn into a lightbox with their own unique URL. Example URL: /resources/?resource=dlna The information for these lightboxes is pulled from a standard page: /resources/dlna. Both are indexed, resulting in over 500 indexed pages that are either a simple lightbox or a full page with very minimal content. My question is this: Should they be de-indexed? Another option I'm knocking around is working with the client to create Skyscraper pages, but this is obviously a massive undertaking given how many they have. Would appreciate your thoughts. Thanks.

    | Alces
    0
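
    If the lightbox URLs should drop out of the index while the full /resources/dlna pages stay, a robots meta tag emitted only on the lightbox template (e.g. when the server detects the ?resource= query parameter) is the lightweight option; this is a sketch, not a recommendation for either choice:

```html
<!-- Emitted only on /resources/?resource=dlna style URLs -->
<meta name="robots" content="noindex, follow">
```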

  • Hi there, about 3.5 weeks ago I noticed my website (www.authenticstyle.co.uk) had gone from ranking in second place for our main key phrase, "web design dorset", to totally dropping off the SERPs for that particular search phrase; it's literally nowhere to be seen. It seems that other pages of my website still rank, but not the homepage. I then noticed that I had an unread alert in my Google Search Console account saying that a staging site we were hosting on a subdomain (domvs.authenticstyle.co.uk) had hacked content: a couple of PDF files with weird file names. The strange thing is we'd taken this staging site down a few weeks earlier, BUT one of my staff had left an A record set up in our Cloudflare account pointing to that staging server; they'd forgotten to remove it when removing the staging site. I then removed the A record myself and submitted a reconsideration request in Google Search Console (which I still haven't received confirmation of) in the hope of everything sorting itself out. Since then I've also grabbed a Moz Pro account to try and dig a little deeper, but without any success. We have a few warnings for old 404s, some missing meta descriptions on some pages, and some backlinks that have accumulated over time with highish spam ratings, but nothing major; nothing that would warrant a penalty as far as I can tell. From what I can make out, we've been issued a penalty on our homepage only, but I don't understand why we would get penalised for hacked content if the site domvs.authenticstyle.co.uk no longer existed (would it just be due to that erroneous A record we forgot to remove?). I contacted a few freelance SEO experts, and one came back to me saying I'd done everything correctly and that I should see our site appearing again a few days after submitting the reconsideration request. It's been 3 weeks and nothing. I'm at a huge loss as to how my site can recover from this. What would you recommend?
    I even tried getting our homepage to rank for a variation of "web design dorset", but it seems our homepage has been penalised for anything with "dorset" in the key phrase. Any pointers would be HUGELY appreciated. Thanks in advance! Will

    | wsmith727
    0

  • A client had a website that was hacked about a year ago. Hackers went in and added a bunch of spam landing pages for various products. This was before the site had installed an SSL certificate. After the hack, the site was purged of the hacked pages and an SSL certificate was implemented. Part of that process involved setting up a rewrite that redirects http pages to the https versions. The trouble is that the spam pages are still being indexed by Google, even months later. If I do a site: search, I still see all of those spam pages come up before most of the key "real" landing pages. The thing is, the listings in the SERP are for the http versions, so they're redirecting to the https version before serving a 404. Is there any way I can fix this without removing the rewrite rule?

    | SearchPros
    1
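
    One approach that keeps the blanket http-to-https rewrite intact is to add a more specific rule above it that answers the hacked URL pattern with 410 Gone, so Google gets a firm removal signal instead of a redirect-then-404. A sketch; the spam path pattern here is hypothetical:

```apache
RewriteEngine On
# Serve 410 Gone for the hacked landing pages, before the https rewrite
RewriteRule ^spam-products/ - [G]
# Existing http -> https redirect stays below, untouched
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
```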

  • This is my link; can it help me? marble Egypt

    | saharali15
    1

  • Hi, hopefully someone can help; I'm pulling my hair out and can't seem to find where this redirect is coming from. Currently there is a redirect from http://bimi.co to https://bimi.co, and then to our real domain https://www.bimi.co. But I can't find it. I have checked through the .htaccess file and through the Yoast redirects. Any suggestions from anyone who has had this before in WordPress? The .htaccess says the following; I have googled to see whether it's causing the redirect, but I'm none the wiser:
    <code># BEGIN LSCACHE
    ## LITESPEED WP CACHE PLUGIN - Do not edit the contents of this block! ##
    <IfModule LiteSpeed>RewriteEngine on
    CacheLookup on
    RewriteRule .* - [E=Cache-Control:no-autoflush]
    RewriteRule ^min/\w+.(css|js) - [E=cache-control:no-vary]
    ### marker CACHE RESOURCE start ###
    RewriteRule wp-content/./[^/](responsive|css|js|dynamic|loader|fonts).php - [E=cache-control:max-age=3600]
    ### marker CACHE RESOURCE end ###
    ### marker FAVICON start ###
    RewriteRule favicon.ico$ - [E=cache-control:max-age=86400]
    ### marker FAVICON end ###</IfModule>
    ## LITESPEED WP CACHE PLUGIN - Do not edit the contents of this block! ##
    # END LSCACHE
    # BEGIN NON_LSCACHE
    ## LITESPEED WP CACHE PLUGIN - Do not edit the contents of this block! ##
    ### marker MINIFY start ###
    <IfModule mod_rewrite.c>RewriteEngine on
    RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} ^(.*)/min/(\w+).(css|js)$
    RewriteCond %1/wp-content/cache/$2/$1.$2 -f
    RewriteRule min/(\w+).(css|js) wp-content/cache/$2/$1.$2 [L]</IfModule>
    ### marker MINIFY end ###
    ## LITESPEED WP CACHE PLUGIN - Do not edit the contents of this block! ##
    # END NON_LSCACHE
    # BEGIN WordPress
    <IfModule mod_rewrite.c>RewriteEngine On
    RewriteBase /
    RewriteRule ^index.php$ - [L]
    RewriteCond %{REQUEST_FILENAME} !-f
    RewriteCond %{REQUEST_FILENAME} !-d
    RewriteRule . /index.php [L]</IfModule>
    # END WordPress</code>

    | KellyDSD86
    0

  • We have a https site and have been checking our 301 re-directs from the old http pages. All seem fine except one...and it is ONLY weird in Firefox (it works OK on Chrome and IE). The http version of that one URL is redirecting to the correct https URL, but with ?ref=wookmark being appended to the end. Why? On the Firefox browser only... http://www.easydigging.com/broadfork(dot)html 301 redirects to https://www.easydigging.com/broadfork(dot)html?ref=wookmark From the research I did Wookmark seems to be a JQuery feature, but we do not use it (as far as I know). And even if we do, it probably should not pop up when doing a 301 redirect. I did try clearing my cache a few times, with no change in the problem. Any help is appreciated 🙂

    | GregB123
    0

  • My client with alliedautotransport.com has a brother who owns hundreds of relevant websites with great content on them. However, if we have him do some backlinking from those pages, from the same server, would it hurt the rankings or make a difference?

    | SeobyKP
    1

  • Some developers always implement the hreflang for German (which should be "de") as "de-de", i.e. language German and country Germany. There is usually no other German version targeting the other German-speaking countries (mostly ch, at). So obviously the recommendation is to make it "de", and that's the end of it. But I kept wondering and not finding an answer: IF there is only a more specialised hreflang, will Google take that when there is no generic-language version? Example: a search in de-at (or de-ch), where the search result has the following hreflang versions: de-de; x-default (== en); en. Will Google give the result for x-default or de-de?

    | netzkern_AG
    0
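
    For context, a set that covers the generic language plus an x-default fallback might look like the sketch below; the URLs are placeholders, and each page in the set would carry the same group of tags:

```html
<link rel="alternate" hreflang="de" href="https://example.com/de/" />
<link rel="alternate" hreflang="en" href="https://example.com/en/" />
<link rel="alternate" hreflang="x-default" href="https://example.com/en/" />
```

    Using the bare "de" code targets all German-speaking locales (de-at, de-ch included) without needing a separate tag per country.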

  • Hi, I have a question relating to canonical pages that I need clearing up. I am not sure that my Bigcommerce website is correctly configured and just wanted clarification from someone in the know. Take this page for example: https://www.fishingtackleshop.com.au/barra-lures/ The canonical link is https://www.fishingtackleshop.com.au/barra-lures/ The rel="next" link is https://www.fishingtackleshop.com.au/barra-lures/?sort=bestselling&page=2, and that page has a canonical tag of rel='canonical' href='https://www.fishingtackleshop.com.au/barra-lures/?page=2' /> Is this correct and working as it should, or should the canonical tag for the second (pagination) page, https://www.fishingtackleshop.com.au/barra-lures/?page=2, in our source code be saying rel='canonical' href='https://www.fishingtackleshop.com.au/barra-lures/' />

    | oceanstorm
    0
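
    For what it's worth, the common convention is a self-referencing canonical on each paginated page, combined with rel prev/next (which Google has since said it no longer uses as an indexing signal, making the canonical handling the part that matters most). A sketch of what page 2's head might contain under that convention:

```html
<!-- On https://www.fishingtackleshop.com.au/barra-lures/?page=2 -->
<link rel="canonical" href="https://www.fishingtackleshop.com.au/barra-lures/?page=2" />
<link rel="prev" href="https://www.fishingtackleshop.com.au/barra-lures/" />
```

    Canonicalizing every page of the series to page 1 is the alternative pattern, but it can hide products on deeper pages from the index.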

  • Hi guys, in organic SERPs Google is pulling the incorrect product image; instead of the product image, it's showing an image from the related-products sidebar. I've checked the structured data and og:image; everything is set to the product image. Not sure why Google is showing images from the related-products sidebar. Any help, please?

    | SpartMoz
    0
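
    One way to make the intended image unambiguous is explicit Product structured data whose image field points only at the main product photo. A sketch with placeholder values (name, URL, and price are made up):

```json
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Lure",
  "image": "https://www.example.com/images/example-lure.jpg",
  "offers": {
    "@type": "Offer",
    "price": "19.95",
    "priceCurrency": "AUD"
  }
}
```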
