Brackets in a URL String
-
Was talking with a friend about this the other day.
Do brackets and/or braces in a URL string impact SEO? (I know: keep URLs short, human-readable, etc., but for the sake of conversation, has anyone noticed any impact from these particular characters in a URL?)
-
Thanks for the feedback.
I personally don't think any SEO question is naive; there are so many permutations and possibilities. And if you work with a large company and large dev teams with legacy systems, you are often put in scenarios where little questions like this are important and can dictate how an entire scope may or may not be realised. I am compiling a comprehensive set of URL guidelines for my devs (which I will certainly share with the SEOmoz community when complete).
A little side note: someone once told me pipe characters "|" won't impact a URL string. However, they actually frequently break, get truncated, or get double-encoded when being crawled by Google/Bing, with little rhyme or reason.
The question about brackets/braces stemmed from this link from Coke: http://www.cokeunleashed.com.au/wagsallowance.jsp?mkwid={ifsearch:s}{ifcontent:c}WVnyieg7&pcrid=13521527730
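For what it's worth, RFC 3986 doesn't allow braces or pipes to appear unencoded in a URL, which is likely why crawlers handle them inconsistently. A quick sketch with Python's standard library (the query string is taken from the Coke link above) shows the percent-encoded forms a conforming client produces, and how a pipe ends up double-encoded when something encodes it a second time:

```python
from urllib.parse import quote

# Braces are not in RFC 3986's allowed set, so they percent-encode:
query = "mkwid={ifsearch:s}{ifcontent:c}WVnyieg7"
print(quote(query, safe="=&"))
# mkwid=%7Bifsearch%3As%7D%7Bifcontent%3Ac%7DWVnyieg7

# A pipe encodes to %7C; encode the result again and you get the
# "double encoded" %257C that sometimes shows up in crawled URLs.
once = quote("|", safe="")
twice = quote(once, safe="")
print(once, twice)  # %7C %257C
```

Whether a given crawler decodes, re-encodes, or truncates at these characters is the unpredictable part; the encoding itself is fully specified.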
-
Sorry... maybe I've just seen too many naive questions in too short a frame of time.
-
Yes, I agree there are other more pressing issues to debate, but I don't think this question is out of place. I welcome people to post any type of SEO question here, no matter how silly they think it might be. We're all at different stages of learning, and more than once I have been inspired by a naive question to look up something which led to a discovery of immense value and interest to me and others in the SEO community.
From what I can see, brackets do not represent a significant element. In theory they can act as a separator between different logical units of a URL (though there are much better ways to do that, such as using hyphens). I would personally not use them, and I'm not sure whether they cause a problem in the way crawlers interpret URL structure.
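As a sketch of the "hyphens as separators" point (the input string here is just illustrative), collapsing every non-alphanumeric run to a hyphen gives you the separator behaviour brackets might be reached for, without any of the encoding questions:

```python
import re

def slugify(text: str) -> str:
    """Separate logical URL units with hyphens instead of brackets/braces."""
    text = text.lower()
    # Any bracket, brace, space, or other punctuation run becomes one hyphen.
    text = re.sub(r"[^a-z0-9]+", "-", text)
    return text.strip("-")

print(slugify("Health Articles [2016] (General)"))  # health-articles-2016-general
```

Every character in the result is in the always-unreserved set, so nothing ever needs percent-encoding and crawlers have nothing to mangle.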
-
No... they are just awful.
Anyway... sorry to cite myself and link to a post I wrote... http://www.iloveseo.net/the-ten-crappiest-mythical-seo-questions-we-should-not-ask-anymore/
No offence, but wouldn't it be better to give thought to more relevant SEO problems?
Related Questions
-
Duplicate content on URL trailing slash
Hello, some time ago we accidentally made changes to our site which modified the way URLs in links are generated. At once, trailing slashes were added to many URLs (only in links). Links that used to point to example.com/webpage.html were now linking to example.com/webpage.html/. URLs in the XML sitemap remained unchanged (no trailing slash). We started noticing duplicate content (because our site renders the same page with or without the trailing slash). We corrected the problematic PHP URL function so that now all links on the site point to URLs without a trailing slash. However, Google had time to index these pages. Is implementing 301 redirects required in this case?
Intermediate & Advanced SEO | yacpro13
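If 301 redirects are implemented, the target URL can be computed generically rather than rule by rule. This is just a sketch using Python's standard library, not tied to any particular server stack:

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_no_slash(url: str) -> str:
    """301 target: the same URL without its trailing slash (root '/' is kept)."""
    parts = urlsplit(url)
    path = parts.path
    if len(path) > 1 and path.endswith("/"):
        path = path.rstrip("/")
    return urlunsplit(parts._replace(path=path))

print(canonical_no_slash("http://example.com/webpage.html/"))
# http://example.com/webpage.html
```

The equivalent one-line rewrite rule exists for most web servers; the point is only that every slashed variant should 301 to exactly one canonical form.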
Duplicate URL Parameters for Blog Articles
Hi there, I'm working on a site which uses parameter URLs for category pages that list blog articles. The content on these pages constantly changes as new posts are frequently added; the category may be for 'Health Articles' and list 10 blog posts (snippets from the blog). The URL could appear like so with filtering:
www.domain.com/blog/articles/?taxonomy=health-articles&taxon=general
www.domain.com/blog/articles/?taxonomy=health-articles&taxon=general&year=2016
www.domain.com/blog/articles/?taxonomy=health-articles&taxon=general&year=2016&page=1
All pages currently have the same meta title and description due to limitations with the CMS, and they are also not in our XML sitemap. I don't believe we should be focusing on ranking for these pages, as the content here comes from blog posts (which we do want to rank for on the individual post), but there are 3,000 duplicates and they need to be fixed. Below are the options we have so far:
1) Canonical URLs: have all parameter pages within the category canonicalise to www.domain.com/blog/articles/?taxonomy=health-articles&taxon=general and generate dynamic page titles (I know it's a good idea to use parameter pages in canonical URLs).
2) WMT parameter tool: tell Google all extra parameter tags belong to the main pages (e.g. www.domain.com/blog/articles/?taxonomy=health-articles&taxon=general&year=2016&page=3 belongs to www.domain.com/blog/articles/?taxonomy=health-articles&taxon=general).
3) Noindex: remove all the blog category pages. I don't know how Google would react if we were to remove 3,000 pages from our index (we have roughly 1,700 unique pages).
We are very limited with what we can do to these pages; if anyone has any feedback or suggestions it would be much appreciated. Thanks!
Intermediate & Advanced SEO | Xtend-Life
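A sketch of the canonical-URL option: if the CMS exposes the current URL, the canonical target can be derived by keeping only the parameters that define the category itself. The parameter names below are taken from the examples in the question; adjust them to the real taxonomy:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Params that identify the listing; filters (year, page, ...) are dropped.
CATEGORY_PARAMS = {"taxonomy", "taxon"}

def canonical_url(url: str) -> str:
    """Point every filtered variant's rel=canonical at the base category URL."""
    parts = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k in CATEGORY_PARAMS]
    return urlunsplit(parts._replace(query=urlencode(kept)))

print(canonical_url("http://www.domain.com/blog/articles/"
                    "?taxonomy=health-articles&taxon=general&year=2016&page=1"))
# http://www.domain.com/blog/articles/?taxonomy=health-articles&taxon=general
```

This collapses all 3,000 variants onto the per-category base URLs without removing anything from the index outright.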
Image URL Change Catastrophe
We have a site with over 3MM pages indexed, and an XML sitemap with over 12MM images (312k indexed at peak). Last week our traffic dropped off a cliff. The only major change we made to the site in that period was adding a DNS record for all of our images that moved them from a SoftLayer Object Storage domain to a subdomain of our site. The old URLs still work, but we changed all the links across our site to the new subdomain. The big mistake we made was that we didn't update our XML sitemap to the new URLs until almost a week after the switch (we totally forgot that they were served from a process with a different config file). We believe this was the cause of the issue because:
1) The pages that dropped in traffic were the ones where the images moved, while other pages stayed more or less the same.
2) We have some sections of our property where the images are, and have always been, hosted by Amazon, and their rankings didn't crater. Same with pages that do not have images in the XML sitemap (like list pages).
3) There wasn't a change in the geographic breakdown of our traffic, which we looked at because the timing was around the same time as Pigeon.
4) There were no warnings or messages in Webmaster Tools to indicate a manual action around something unrelated.
5) The number of images indexed in our sitemap according to Webmaster Tools dropped from 312k to 10k over the past week.
6) The gap between the change and the drop was 5 days. It takes Google more than 10 to crawl our entire site, so the timing seems plausible.
Of course, it could be something totally unrelated and just coincidence, but we can't come up with any other plausible theory that makes sense given the timing and pages affected. The XML sitemap was updated last Thursday, and we resubmitted it to Google, but still no real change. Anyone had a similar experience? Any way to expedite the climb back to normal traffic levels?
Intermediate & Advanced SEO | wantering
Which URL structure is better?
Hi everybody, which URL structure is better?
Type 01:
http://www.domain.com/category-a/
http://www.domain.com/category-a/subcategory-a-1/
http://www.domain.com/category-a/subcategory-a-2/
http://www.domain.com/category-b/
http://www.domain.com/category-b/subcategory-b-1/
http://www.domain.com/category-b/subcategory-b-2/
Type 02:
http://www.domain.com/category-a/
http://www.domain.com/subcategory-a-1/
http://www.domain.com/subcategory-a-2/
http://www.domain.com/category-b/
http://www.domain.com/subcategory-b-1/
http://www.domain.com/subcategory-b-2/
How do these two types affect ranking, sitelinks in Google, and passing PR from the root to other pages? Thanks, Prasad
Intermediate & Advanced SEO | cprasad
Lots of incorrect URLs indexed - Googlebot found an extremely high number of URLs on your site
Hi, any assistance would be greatly appreciated. Basically, our rankings and traffic have been dropping massively recently, and Google sent us a message stating "Googlebot found an extremely high number of URLs on your site". This first highlighted the problem to us: for some reason our e-commerce site has recently generated loads (potentially thousands) of rubbish URLs, giving us duplication everywhere, which Google is obviously penalising us for in the form of dropping rankings. Our developer is trying to find the root cause of this, but my concern is: how do we get rid of all these bogus URLs? If we use GWT to remove URLs, it's going to take years. We have just amended our robots.txt file to exclude them going forward, but they have already been indexed, so I need to know: do we put a 301 redirect on them, or an HTTP 404 code to tell Google they don't exist? Do we also put a noindex on the pages? What is the best solution? A couple of examples of our problems: in Google, type site:bestathire.co.uk inurl:"br" and you will see 107 results. This is one of many lots we need to get rid of. Also, site:bestathire.co.uk intitle:"All items from this hire company" shows 25,300 indexed pages we need to get rid of. Another thing to help tidy this mess up going forward is to improve our pagination work. Our site uses rel=next and rel=prev but no canonical. As a belt-and-braces approach, should we also put canonical tags on our category pages where there is more than one page? I was thinking of doing it on page 1 of our most important pages, or the view-all page, or both. What's the general consensus? Any advice on both points greatly appreciated. Thanks, Sarah.
Intermediate & Advanced SEO | SarahCollins
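One way to sanity-check an amended robots.txt before relying on it is Python's built-in parser. The rules and paths below are illustrative, not the site's actual file:

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
# Feed the rules directly for this sketch; against a live site you would
# use rp.set_url("http://www.bestathire.co.uk/robots.txt") and rp.read().
rp.parse([
    "User-agent: *",
    "Disallow: /br/",
])

print(rp.can_fetch("*", "http://www.bestathire.co.uk/br/bogus-page"))   # False
print(rp.can_fetch("*", "http://www.bestathire.co.uk/hire/real-page"))  # True
```

Note that robots.txt only stops future crawling; already-indexed URLs still need a 404/410 or a 301 to drop out, and a blocked URL can't even be recrawled to see a noindex tag.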
URL Structure for Directory Site
We have a directory that we're building, and we're not sure if we should try to make each page an extension of the root domain or utilise sub-directories as users narrow down their selection. What is the best practice here for maximising your SERP authority?
Choice #1 - hyphenated architecture (no sub-folders):
1) State page: /state/
2) City page: /city-state/
3) Business page: /business-city-state/
4) Location page: /locationname-city-state/
or...
Choice #2 - using sub-folders on drill-down:
1) State page: /state/
2) City page: /state/city
3) Business page: /state/city/business/
4) Location page: /locationname-city-state/
Again, just to clarify, I need help determining the best methodology for achieving the greatest SEO benefits. Just by looking, it would seem that choice #1 would work better because the URLs are very clear and SEF. But at the same time it may be less intuitive for search. I'm not sure. What do you think?
Intermediate & Advanced SEO | knowyourbank
Googlebot crawling partial URLs
Hi guys, I checked my email this morning and I've got a number of 404 errors from over the weekend where Google has tried to crawl some of my existing pages but not found the full URL. Instead of hitting 'domain.com/folder/complete-pagename.php' it's hit 'domain.com/folder/comp'. This is definitely Googlebot/2.1; http://www.google.com/bot.html (66.249.72.53), but I can't find where it would have found only the partial URL. It certainly wasn't on the domain it's crawling, and I can't find any links from external sites pointing to us with the incorrect URL. Googlebot is doing the same thing across a single domain but in different sub-folders. Having checked Webmaster Tools, there aren't any hard 404s, and the soft ones aren't related and haven't occurred since August. I'm really confused as to how this is happening. Thanks!
Intermediate & Advanced SEO | panini
Quick URL structure question
Say you've got 5,000 articles. Each of these is from 2-3 generations of taxonomy. For example:
example.com/motherboard/pc/asus39450
example.com/soundcard/pc/hp39
example.com/ethernet/software/freeware/stuffit294
None of the articles were SUPER popular as is, but they still bring in a bit of residual traffic combined - a few thousand visits or so a day. You're switching to a brand new platform. Awesome new structure, taxonomy, etc. The real deal. But, historically, you don't have the old taxonomy functions. The articles above, if created today, file under example.com/hardware/. This is the way it is from here on out. But what to do with the historical files?
1) Keep the original URL structure in the new system. Readers might be confused if they try to reach example.com/motherboard, but at least you retain all SEO weight, and these articles are all older anyway. Who cares? Grab some lunch.
2) Change the URLs to /hardware/ and redirect everything the right way. Lose some rank maybe, but it's a smooth operation, nice and neat. Grab some dinner.
3) Change the URLs to /hardware/, DON'T redirect, and surprise Google with 5k articles about old computer hardware. Magical traffic splurge, go skydiving.
4) Panic, cry into your pillow. Get a job signing receipts at Costco.
Thoughts?
Intermediate & Advanced SEO | EricPacifico
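If option 2 is chosen, the redirect map doesn't have to be 5,000 hand-written rules; one pattern per legacy section is enough. A sketch (the section names are just the ones from the example URLs above):

```python
import re

# Hypothetical legacy top-level taxonomy sections that now live under /hardware/.
LEGACY_SECTIONS = re.compile(r"^/(?:motherboard|soundcard|ethernet)/")

def redirect_target(path: str):
    """Return the 301 target for a legacy article path, else None."""
    if LEGACY_SECTIONS.match(path):
        slug = path.rstrip("/").rsplit("/", 1)[-1]  # last segment = article slug
        return f"/hardware/{slug}"
    return None

print(redirect_target("/motherboard/pc/asus39450"))  # /hardware/asus39450
print(redirect_target("/about"))                     # None
```

The same logic translates to a handful of rewrite rules in whatever server fronts the new platform, so the residual link equity follows the articles instead of evaporating.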