Question about robots.txt
-
I just started my own e-commerce website and hosted it on one of the popular e-commerce platforms, Pinnacle Cart. It has a lot of functions, like page sorting, a mobile website, etc. I adjusted the URL parameters in Google Webmaster Tools three weeks ago, but I still get the same duplicate errors on meta titles and descriptions from both the Google crawl and the SEOmoz crawl. I am not sure if I made a mistake in choosing Pinnacle Cart, because it is not very flexible when it comes to editing the core website pages. There is no way to adjust the canonical, to insert robots.txt on every page, etc.; however, it does have a function to submit a single robots.txt file and to edit the .htaccess. The website pages are in PHP format.
For example, this URL:
www.mycompany.com has a duplicate title and description with www.mycompany.com/site-map.html (there is no way of editing the title and description of my sitemap)
Another error is:
www.mycompany.com has a duplicate title and description with http://www.mycompany.com/brands?url=brands
Is it possible to exclude those URLs with "url=" and my "sitemap.html" in robots.txt? Or is the URL parameter setting in Google enough, and it just takes a lot of time?
Can somebody help me with the format of robots.txt, please? Thanks.
-
Thank you for your reply. This surely helps. I will probably edit the .htaccess.
-
That's the problem with most sitebuilder-type programs: they are very limited.
Perhaps look at your site title and page titles. Usually the site title is included on all of your webpages, followed by the page title, so you could simply name your site www.yourcompany.com and then add an individual page title to each page.
A robots.txt file is not supposed to be added to every page; it is a single file at the root of your site, and it only tells the bots what to crawl and what not to.
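Since you asked about the format: a minimal robots.txt sketch for keeping crawlers away from the parameterized duplicates might look like the lines below. The /*?url= pattern is an assumption based on your /brands?url=brands example, and the * wildcard is honored by Google and Bing but not by every crawler. As another answer here notes, don't add a Disallow for your sitemap file.

User-agent: *
# Block crawling of URL-parameter duplicates such as /brands?url=brands
Disallow: /*?url=

Note that this only stops crawling; it won't remove pages that are already indexed, so the duplicate-title warnings can take a while to clear either way.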
If you can edit the .htaccess, you should be able to get to the individual pages and insert/change the code for the titles. Just be aware that doing it manually can work, but sometimes when you go back to make an edit in the builder it may undo all of your manual changes. If that's the case, get your site perfect first, then do the individual code changes as the last step.
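Since you mentioned you can edit the .htaccess, one more option is to 301-redirect the parameterized duplicates to their clean URLs, which consolidates the titles onto one address. This is only a hedged sketch, assuming the store runs on Apache with mod_rewrite enabled and that url= is the only parameter you need to catch:

RewriteEngine On
# Send any URL carrying a url= query string to the same path without it,
# e.g. /brands?url=brands -> /brands (the trailing ? strips the query string)
RewriteCond %{QUERY_STRING} (^|&)url= [NC]
RewriteRule ^(.*)$ /$1? [R=301,L]

Test it carefully if you can - hosted carts sometimes rely on query strings for features like the page sorting you mentioned.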
Hope this helps.
-
I have no way of adding those either. Oops, thanks for the warning. I guess I will have to wait for Google to filter out the parameters.
Thanks for your answer.
-
You certainly don't want to block your sitemap file in robots.txt. It takes some time for Google to filter out the parameters and that is the right approach. If there is no way to change the title, I wouldn't be so concerned over a few pages with duplicate titles. Do you have the ability to add a noindex,follow meta tag on these pages?
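For reference, that tag would go in the head of each affected page and looks like this:

<meta name="robots" content="noindex,follow">

That keeps the duplicates out of the index while still letting the bots follow (and pass value through) the links on those pages.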
Related Questions
-
Adding your sitemap to robots.txt
Hi everyone, Best practice question: When adding your sitemap to your robots.txt file, do you add the whole sitemap at once or do you add different subcategories (products, posts, categories, ...) separately? I'm very curious to hear your thoughts!
Technical SEO | WeAreDigital_BE
-
Duplicate video content question
This is really two questions in one. 1. If we put a video on YouTube and on our site via Wistia, how would that affect our rankings/authority/credibility? Would we get punished for duplicate video content? 2. If we put a Wistia-hosted video on our website twice, on two different pages, would we get hit for having duplicate content? Any other suggestions regarding hosting on Wistia and YouTube versus just Wistia for product videos would be much appreciated. Thank you!
Technical SEO | ShawnHerrick
-
Site command / Footprint Question
Hi all, I am looking for websites with keywords in the domain, and I am using: inurl:keyword/s. The results that come back include sub-pages and not only domains with the keywords in the root domain. Example of what I mean: www.website.com/keyword/. What I want displayed only: www.keyword/s.com. Does anyone know of a site command I can use to display URLs with keywords in the root domain only? Thanks in advance, Greg
Technical SEO | AndreVanKets
-
Google Change of Address with Questionable Backlink Profile
We have a .com domain where we are 301-ing the .co.uk site into it before shutting it down - the client no longer has an office in the UK and wants to focus on the .com. The .com is a nice domain with good trust indicators. I've just redesigned the site, added a wad of healthy structured markup, and had the duplicate content mostly rewritten - still finishing off this job, but I think we got most of it with Copyscape. The site has not so many backlinks, but we're working on this too, and the ones it does have are natural, varied, and from trustworthy sites. We also have a little feature on the redesign coming up in .Net magazine early next year, so that will help.
The .co.uk, on the other hand, has a fair few backlinks - 1,489 showing in Open Site Explorer - and I spent a good amount of time matching the .co.uk pages to similar content on the .com so that the redirects would hopefully pass some PageRank. However, approximately a year later, we are struggling to grow organic traffic to the .com site. It feels like we are driving with the handbrake on. I went and did some research into the backlink profile of the .co.uk, and it is mostly made up of article submissions: a few on 'quality' (not in my opinion) article sites such as ezine, and the majority on godawful and broken spammy article sites and old blogs bought for SEO purposes.
So my question is, in light of the fact that the SEO company that 'built' these shoddy links will not reply to my questions as to whether they received a penalty notification or noticed a Penguin penalty, and the fact that they have also deleted the Google Analytics profiles for the site, how should I proceed? To my mind I have 3 options:
1. Ignore the bad majority in the .co.uk backlink profile, keep up the change of address and 301s, and hope that we can drown out the shoddy links by building new quality ones to the .com. Hopefully the crufty links will fade into insignificance over time. I'm not too keen on this course of action.
2. Use the disavow tool for every suspect link pointing to the .co.uk site (no way I will be able to get the links removed manually). The advice I've seen also suggests submitting a reinclusion request afterwards, but this seems pointless considering we are just 301-ing to the new (.com) site.
3. Disassociate ourselves completely from the .co.uk site - forget about the few quality links to it and cut our losses. Remove the change of address request in GWT and possibly remove the site altogether and return 410 headers for it, just to force the issue. Clean slate in the post.
What say you, mozzers? Please help - I'm working myself blue in the face to fix the organic traffic issues for this client and not getting very far as yet.
Technical SEO | LukeHardiman
-
Magento Robots & overly dynamic URLs
How can I block all URLs on a Magento store that have 2 or more dynamic parameters in them, since all the parameters have the attribute name in them and not some uniform ID? Would something like: Disallow: /?&* work? The only thing that is constant throughout all the custom parameters is that they are separated with "&". Thanks 🙂
Technical SEO | tilenkrivec
-
Meta tags question - imagetoolbar
We inherited some sites from another vendor & they have these tags in the head of all pages. Are they of any value at all? Thanks for the help! Wick Smith
Technical SEO | wcksmith
-
Pages not ranking - Linkbuilding Question
It has been about 3 months since we made some new pages, with new, unique copy, but a lot of pages (even though they have been indexed) are not ranking in the SERPs. I tested it by taking a long snippet of the unique copy from the page and searching for it on Google. I also checked the ranking using http://arizonawebdevelopment.com/google-page-rank, which may not be accurate, I know, but would give some indication. The interesting thing was that for the unique copy snippets, sometimes a different page of our site, many times the home page, shows up in the SERPs. So my questions are:
1. Is there some issue / penalty / sandbox deal with the pages that are not indexed? How can we check that? Or has it just not been enough time?
2. Could there be any duplicate copy issue going on? Shouldn't be, as they are all well-written, completely unique copy. How can we check that?
3. Flickr image details - some of the pages display the same set of images from Flickr. The details (filenames, alt info, titles) are getting pulled from Flickr and can be seen in the source code. It's a pretty large block of words, which is the same on multiple pages and uses a lot of keywords. Could this be considered duplication or keyword stuffing, and be causing this? If you think so, we will remove it right away. And then what do we do to improve re-indexing?
The reason I started this was because we have a few good opportunities right now for links, and I was wondering what pages we should link to and try to build rankings for. I was thinking about pointing one to /cast-bronze-plaques, but the page is not ranking. The home page, obviously, is the oldest page and ranks the best. The cast bronze plaques page is very new. Would linking to pages that are not ranking well be a good idea? Would it help them to get indexed / ranking? Or would it be better to link to the pages that are already indexed / ranking? If you link to a page that does not seem to be indexed, will it help the domain's link profile? Will the link juice still flow through the site?
Technical SEO | Impact-201555