X-Cart Directory 301s Not Working
-
I'm working with someone to make fixes to an X-Cart site, but I'm at a loss on a few of them. Some directory URLs on their ecommerce site had been changed to make them more descriptive and human friendly. The problem is that, according to the team's coder, simple redirects won't work for the directories, and mod_rewrite and RedirectMatch didn't work for some unknown reason.
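For context, this is roughly the kind of rule I understand was tried and didn't take; the directory names here are made-up examples, and X-Cart's own rewrite rules would presumably need to sit below it in the store's .htaccess:

# Hypothetical example: 301 an old directory (and everything under it) to its new name
RewriteEngine On
RewriteRule ^old-category/(.*)$ /new-category/$1 [R=301,L]

# Or the mod_alias equivalent:
# RedirectMatch 301 ^/old-category/(.*)$ /new-category/$1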
I don't really know anything about X-Cart. I've made some basic changes and redirects before through their admin panel, but I have no idea how to get directories to 301 properly. Any insights? Thanks!
-
Their coder will take a look at it when he's freed up some time to see if there's a way to do that. Hopefully it's something easy like that and doesn't require numerous workarounds to get going.
-
I don't know anything about X-Cart, but I know that some CMSs turn off .htaccess processing (Joomla, for example), and there may be an interface to turn it back on.
Related Questions
-
Unsolved URL dynamic structure issue for new global site where I will redirect multiple well-working sites.
Dear all, we are working on a new platform called https://www.piktalent.com, where we basically aim to redirect many smaller sites we have with quite a lot of SEO traffic related to internships. Our previous sites are ones like www.spain-internship.com, www.europe-internship.com and other similar ones (around 9 in total). Our idea is to redirect these sites to the new platform smoothly, bit by bit. The new site is custom-made in Python and Node, much more scalable, and intended to grow into a bigger platform with an app and so on.

For the new site, we decided to create 3 areas for the main content: piktalent.com/opportunities (all the vacancies), piktalent.com/internships and piktalent.com/jobs, so we can categorize the different types of pages we have, with all the vacancies sitting under /opportunities.

The problem comes when the site generates the different static landings and dynamic searches. We have static landing pages like www.piktalent.com/internships/madrid, but dynamically the site also generates www.piktalent.com/opportunities?search=madrid. Most searches will produce that type of URL rather than following the structure of domain / type of vacancy / city / name of the vacancy.

I have been considering 2 potential solutions: either applying canonicals, or marking the parameter URLs as noindex in webmaster tools. What do you think is the right approach? I am worried about potential duplicate content and conflicts between the static content and the dynamic one. My CTO insists that the dynamic URLs have to be like that, but I am not 100% sure. Can someone provide input on this? Is there a way to block the dynamically generated URLs? Has anyone had a similar experience? Regards,
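For example, the canonical option would mean something like this on each dynamic search page, pointing at the matching static landing page (using the example URLs above, purely as a sketch):

<!-- On www.piktalent.com/opportunities?search=madrid -->
<link rel="canonical" href="https://www.piktalent.com/internships/madrid" />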
Technical SEO | Jose_jimenez -
Adding directories to robots.txt disallow causes pages to have Blocked Resources
In order to eliminate duplicate/missing title tag errors for a directory (and its sub-directories) under www that contains our third-party chat scripts, I added the parent directory to the robots.txt disallow list. We are now receiving a blocked resource error (in Webmaster Tools) on all of the pages that link to a JavaScript file (for live chat) in that parent directory. My host suggests the warning is only a notice and we can leave things as they are without worrying about the pages being de-ranked or penalized. I am wondering if this is true, or if we should remove the directory that contains the JS from the robots.txt file and find another way to resolve the duplicate title tags.
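For illustration, one pattern that might help here (the directory and file names below are hypothetical) is to keep the directory disallowed but explicitly allow the script Google needs to render the pages:

User-agent: *
Disallow: /chat-scripts/
Allow: /chat-scripts/livechat.js

The more specific Allow rule takes precedence for Googlebot, so the rest of the directory stays out of the crawl while the blocked-resource warning for that script should go away.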
Technical SEO | miamiman100 -
Does using data-href="" work more effectively than href="" rel="nofollow"?
I've been looking at some bigger enterprise sites and noticed some of them use HTML like this:

<a data-href="http://www.otherdomain.com/" class="nofollow" rel="nofollow" target="_blank">...</a>

instead of a regular href="". Does using data-href and some JavaScript help with shaping internal links, rather than just using a strict nofollow?
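For context, the JavaScript behind that pattern is usually something like the sketch below; the selector and behaviour are assumptions for illustration, not taken from any particular site:

// Hypothetical sketch: the element has no crawlable href, and a click handler does the navigation
document.querySelectorAll('a[data-href]').forEach(function (el) {
  el.addEventListener('click', function (e) {
    e.preventDefault();
    window.open(el.getAttribute('data-href'), '_blank');
  });
});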
Technical SEO | JDatSB -
How Does Google's "index" find the location of pages in the "page directory" to return?
This is my understanding of how Google's search works, and I am unsure about one thing in particular:

1. Google continuously crawls websites and stores each page it finds (let's call it the "page directory").
2. Google's "page directory" is a cache, so it isn't the "live" version of the page.
3. Google has separate storage called "the index", which contains all the keywords searched. These keywords in "the index" point to the pages in the "page directory" that contain the same keywords.
4. When someone searches a keyword, that keyword is looked up in the "index", which returns all relevant pages in the "page directory".
5. These returned pages are given ranks based on the algorithm.

The one part I'm unsure of is how Google's "index" knows the location of relevant pages in the "page directory". The keyword entries in the "index" point to the "page directory" somehow. I'm thinking each page has a URL in the "page directory", and the entries in the "index" contain these URLs. Since Google's "page directory" is a cache, would the URLs be the same as the live website (and would the keywords in the "index" point to these URLs)? For example, if a webpage is found at www.website.com/page1, would the "page directory" store this page under that URL in Google's cache? The reason I want to discuss this is to understand the effects of changing a page's URL by getting a better picture of how the search process works.
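To make the question concrete, here is a toy sketch of the general idea as I picture it (purely illustrative, not how Google actually stores anything):

// Conceptual sketch only: a toy "index" and "page directory"
const pageDirectory = {
  'https://www.website.com/page1': 'welcome to our widget store',
  'https://www.website.com/page2': 'widget repair guide',
};

// Build the index: keyword -> URLs of pages in the page directory that contain it
const index = {};
for (const [url, text] of Object.entries(pageDirectory)) {
  for (const word of text.split(' ')) {
    (index[word] = index[word] || []).push(url);
  }
}

// A search looks the keyword up in the index and gets back URLs to retrieve from the page directory
console.log(index['widget']); // -> both URLs above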
Technical SEO | reidsteven75 -
Why Do Transparent Networks Still Work
Hi Mozzers, My client has a major competitor that dominates several industry head terms. A check of their link profile reveals that they have 50 low DA domains that are identical to the main site, the only difference being that they all link to the main domain for these terms. They're not even attempting to disguise the network but it works. Can anyone tell me why? See: www.omega.com/vhpc/
Technical SEO | waynekolenchuk -
How to prevent directory from being accessed by search engines?
Pretty much as the question says: is there any way to stop search engines from crawling a directory? I am working on a WordPress installation for my site, but I don't want it listed in search engines until it's ready to be shown to the world. I know the simplest way is to password-protect the directory, but I had some issues when I tried to implement that, so I'd like to see if there's a way to do it without passwords. Thanks in advance.
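For reference, the two common non-password approaches look roughly like this (the /dev-wp/ directory name is a made-up example). In robots.txt at the site root:

User-agent: *
Disallow: /dev-wp/

Or, in an .htaccess file inside that directory (requires mod_headers), which keeps the pages out of the index rather than just discouraging crawling:

# Ask search engines not to index anything served from this directory
Header set X-Robots-Tag "noindex, nofollow"

Note that robots.txt alone only blocks crawling; a URL that is linked from elsewhere can still end up indexed without the noindex header.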
Technical SEO | Xee -
Need specifics about mod_proxy for blog domain and 301s
I am getting the IT staff to move our blog from "blog." to "/blog" using mod_proxy for Apache, but I had a couple of questions about this that I was hoping someone here might be able to help with. Is it correct that just setting up mod_proxy will make the blog available at both URLs, the "blog." subdomain and the "/blog" folder? If so, what is the best way to 301 redirect all traffic from "blog." to "/blog"? I assume this could be handled with a blanket 301-style rewrite, but I wanted to get some other opinions before sitting down with my IT guys to do it. I am technical enough to talk about this, but not to do it myself, so experienced opinions are appreciated. Thanks!
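Something like this is what I have in mind for them; hostnames and the internal port are placeholders, and it assumes the blog application can be reached internally so the public subdomain can be handed over to the redirect:

# Main site vhost: serve /blog/ by proxying to the blog application internally
ProxyPass        /blog/ http://127.0.0.1:8080/
ProxyPassReverse /blog/ http://127.0.0.1:8080/

# Public blog.example.com vhost: blanket 301 of every request to the new /blog/ path
RewriteEngine On
RewriteRule ^/?(.*)$ https://www.example.com/blog/$1 [R=301,L]

As I understand it, without that second block the blog really would stay reachable at both URLs, which is part of what I want to confirm.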
Technical SEO | SL_SEM -
Is there a work around for Rel Canonical without header access?
In my work as an SEO writer, I work closely with web designers and usually have behind-the-scenes access. However, the last three clients who hired me have web designers who are not allowing admin access to anyone outside of their companies (including the clients). Is there a workaround for the rel canonical element that is usually placed in the header? I am using the All in One SEO plugin to address part of this issue. Sage advice or discussion on this is appreciated!
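One possibility I'm wondering about, assuming I can at least get file/.htaccess access (the file name and URL below are made up), is sending the canonical as an HTTP header instead of a tag in the page header:

# Hypothetical .htaccess sketch (needs mod_headers): rel=canonical as an HTTP header
<Files "some-page.html">
  Header set Link '<https://www.example.com/preferred-page/>; rel="canonical"'
</Files>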
Technical SEO | TheARKlady