Very Quick Joomla Question
-
Hi,
A client's site was previously built in Joomla, and he wants us to reproduce the content that was in it. The Joomla site is no longer live; it has come to me as an archive containing all of the site's files and folders.
So, I am looking at the files and folders without Joomla installed.
Can someone tell me quickly where the actual page content was stored?
I started looking, but there are some folders I cannot open, and nothing looks the way I expected.
Would appreciate a hint or two from someone who knows Joomla well. Life is too short!
Thanks
Sha
-
Thanks Keri,
Yep, it was the DB I was searching for, but the site was a GoDaddy install and was moved to a different server because of software install issues. Having checked the Wayback Machine and Google's cache, and searched through the files without finding a database, I think I can just refill my mug and settle in to write some new content.
To be honest, that's not really so bad - I'd rather start from scratch than rewrite bad copy any day! It's less than a dozen pages, so it's all good.
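For anyone following along, the two things worth checking for in an extracted Joomla archive are a .sql backup and the database connection details in configuration.php in the site root. A rough sketch of that check (the key names are my assumption about how the config is laid out, not something confirmed in this thread):

```python
import re
from pathlib import Path

def find_db_clues(archive_root):
    """Scan an extracted Joomla archive for SQL dumps and DB settings.

    Joomla stores page content in its MySQL database, not in files, so
    the only on-disk traces are a .sql backup (if one was ever made)
    and the connection details in configuration.php.
    """
    root = Path(archive_root)
    # Any SQL dump bundled anywhere in the archive
    dumps = sorted(str(p) for p in root.rglob("*.sql"))

    settings = {}
    config = root / "configuration.php"
    if config.exists():
        text = config.read_text(errors="ignore")
        # configuration.php holds lines like: public $db = 'sitename_db';
        # (key names here are assumptions about the default config layout)
        for key in ("host", "user", "db", "dbprefix"):
            match = re.search(r"\$%s\s*=\s*'([^']*)'" % key, text)
            if match:
                settings[key] = match.group(1)
    return dumps, settings
```

If the archive has a configuration.php but no .sql file, the content almost certainly stayed behind in the old host's database.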
Most importantly, I now know that I am still sane and not as stupid as I was starting to fear, so thanks for helping resolve that!
Have a great weekend!
Sha
-
I don't have experience with Joomla, and have just cursed my way through a website in Mambo, but the first part of these instructions for copying a Joomla website makes it look like the content is stored in a MySQL database, much like WordPress. I don't know offhand where to look for that database, or whether the client would have included it in the archive you were given, but it's at least a starting point.
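From what I can tell, Joomla keeps article bodies in a single database table, jos_content by default (the jos_ prefix is configurable). If a SQL dump does turn up in the archive, here's a rough sketch of pulling article titles out of it with plain text matching, no MySQL server needed. The default column order (id first, title as the first quoted field) is an assumption on my part, and this is a crude text scan, not a real SQL parser:

```python
import re

def article_titles_from_dump(dump_sql, prefix="jos_"):
    """Pull article titles out of a Joomla SQL dump's text.

    Joomla stores articles in the <prefix>content table; its INSERT
    statements carry the title as the first quoted value (the numeric
    id comes before it). This is a rough text-level extraction.
    """
    titles = []
    # Match each INSERT INTO `jos_content` ... VALUES (...); statement
    pattern = re.compile(
        r"INSERT INTO [`']?%scontent[`']?.*?VALUES\s*\((.*?)\);" % re.escape(prefix),
        re.S,
    )
    for values in pattern.findall(dump_sql):
        # The first quoted field in the row is the title
        match = re.search(r"'((?:[^'\\]|\\.)*)'", values)
        if match:
            titles.append(match.group(1))
    return titles
```

That should at least tell you quickly whether the pages you need are actually in the dump before you commit to restoring it.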