Session IDs and crawlers
-
Hello there.
When we set up our e-commerce website virtualsheetmusic.com back in 2001 to assign session IDs to users, we decided not to assign them when a bot requested the page. We wanted to be sure that bots, which generally can't store cookies, wouldn't find links containing a different session ID on every visit.
To clarify, session IDs are generated on our system in the standard PHP way: if a user has cookies enabled, a cookie called PHPSESSID is created to store the session ID. If cookies are not enabled, the system automatically appends the session ID to every link URL on the page, which could cause bots to find a different variant of each link URL, with the session ID appended, on every crawl.
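The decision described above can be sketched roughly as follows. This is a language-agnostic Python sketch of the logic, not the site's actual PHP code (in PHP you would typically turn off `session.use_trans_sid` for crawler user agents); the function names and the bot token list are illustrative.

```python
import re

# Tokens commonly found in crawler User-Agent strings (illustrative, not exhaustive).
BOT_PATTERN = re.compile(
    r"googlebot|bingbot|slurp|duckduckbot|baiduspider", re.IGNORECASE
)

def is_bot(user_agent: str) -> bool:
    """Return True if the User-Agent looks like a search engine crawler."""
    return bool(BOT_PATTERN.search(user_agent or ""))

def append_session_id(url: str, session_id: str, user_agent: str) -> str:
    """Append the session ID to a link URL, but only for human visitors.

    Crawlers get the plain URL, so they never discover a different
    URL variant on each visit.
    """
    if is_bot(user_agent):
        return url
    separator = "&" if "?" in url else "?"
    return f"{url}{separator}PHPSESSID={session_id}"
```

For example, `append_session_id("/sheet.php?id=3", "abc", "Googlebot/2.1")` returns the plain URL, while the same call with a browser User-Agent returns the URL with `PHPSESSID=abc` appended.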
Now, after 12 years, we are wondering whether this is still a worthwhile solution, or whether it could be detrimental or negative in some way. What are your thoughts on this issue?
Thank you in advance for any thoughts.
Fab.
-
Thank you Kurt, that's exactly what I thought but I wanted to have confirmation from the experts community.
Thank you again!
-
You're not giving the search engines different content, so it's not deceptive. I can't think of any way it would harm you.
-
Thank you guys for your replies and insights. I lean toward keeping what we have lived with so far, which means leaving in place the system that disables session IDs when bots request our pages, unless you tell me there are downsides to doing that... that's really what I am trying to find out here. Is there any downside to serving search engines pages without session IDs while users get them?
Thank you again.
-
I can't speak to the technical side of setting up session IDs; however, you can deal with the URL issue using canonical tags and by configuring URL parameters in Google and Bing Webmaster Tools. That should prevent the search engines from indexing every URL with a different session ID and keep all of the page authority on the main URL.
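The canonical-tag suggestion amounts to emitting `<link rel="canonical" href="...">` pointing at the URL with the session parameter stripped. A minimal sketch of that stripping step (a Python illustration, with a placeholder domain; the site itself would do this in PHP):

```python
from urllib.parse import urlparse, urlunparse, parse_qsl, urlencode

def canonical_url(url: str, session_param: str = "PHPSESSID") -> str:
    """Strip the session-ID parameter so every session-ID'ed variant
    of a page shares one canonical URL."""
    parts = urlparse(url)
    # Keep every query parameter except the session ID.
    query = [(k, v) for k, v in parse_qsl(parts.query) if k != session_param]
    return urlunparse(parts._replace(query=urlencode(query)))
```

So `canonical_url("https://example.com/sheet.php?id=3&PHPSESSID=abc123")` yields `https://example.com/sheet.php?id=3`, which is what the canonical tag on all the session-ID'ed variants should point to.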
Kurt Steinbrueck
OurChurch.Com -
Hi Fabrizio,
Nowadays I would say there are better solutions to this issue, though I'm not sure the SEO impact would be big enough to justify rebuilding this feature on the site. I think the best approach is to not set any session IDs in the URL at all, so you have plain URLs. You could then use those pages as the basis of your URL structure and SEO strategy, and canonicalize the session-ID'ed URLs back to the plain ones.
Hope this helps! Makes sense?