Problem indexing a website developed with Ruby on Rails
-
Hi there!
Here we are again. We are having problems indexing one of our clients, whose website was developed with Ruby on Rails.
Google doesn't get the titles right for almost all of our pages. Has anyone had the same problem? Any feedback would help a lot.
Thanks!
-
Hi Eduardo,
For the titles, this is probably due to Google rewriting page titles based on brand searches. They have been experimenting with various ways of displaying titles in the SERPs for branded searches, and if you are searching for 'jobsandtalent' with no spaces, then this is a pretty specific search and Google is rewriting your title based on it. If you search for your whole page title plus the brand, you will see the normal title as expected. It does not have anything to do with Ruby on Rails.
As for the PageRank, this is not a number I place much importance in. I can't remember offhand how often it is updated, but it is not all the time. More to the point would be looking at Moz domain and page metrics, if you ask me. That being said, I see your PR as 5 for the root domain www.jobandtalent.com.
I noticed you seem to be using cookie-based redirects from the main domain to the language folder, so that if you have entered /es once, then going to the .com main page automatically pushes you to .com/es. This can potentially be problematic in terms of Google properly indexing your site. I cannot say if it is responsible for your difficulties in rankings, but in a competitive sector like job postings I would certainly look at changing it so that Google (and users) can view all pages of the site in whichever language they choose, without being pushed into a language based on cookies.
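To make the cookie point concrete, here is a minimal sketch of friendlier locale logic, in plain Ruby rather than actual Rails controller code. All names are hypothetical; the idea is simply to never redirect crawlers, and to redirect only the bare homepage, so every language version stays directly reachable:

```ruby
# Hypothetical sketch: redirect to a cookied locale only for the bare
# homepage, and never for crawlers, so Googlebot (and users following
# links) can reach every language version of the site directly.
BOT_PATTERN = /googlebot|bingbot|slurp/i

def locale_redirect(path, cookie_locale, user_agent)
  return nil if user_agent =~ BOT_PATTERN   # never redirect crawlers
  return nil unless path == "/"             # explicit paths stay put
  return nil if cookie_locale.nil?
  "/#{cookie_locale}"                       # e.g. "/" -> "/es"
end

puts locale_redirect("/", "es", "Mozilla/5.0").inspect        # "/es"
puts locale_redirect("/", "es", "Googlebot/2.1").inspect      # nil
puts locale_redirect("/en/jobs", "es", "Mozilla/5.0").inspect # nil
```

In a real Rails app this would live in a `before_action`, but the decision logic is the same: the cookie may suggest a language, it should not trap every request in one.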
Hope that helps!
-
Lynn is correct. If you give us a link, we can take a look and see if we spot anything.
When you say it doesn't get the titles right: Google often changes the titles depending on the search term, but a site:domain.com search should bring up the correct titles.
-
Hi Eduardo,
There is no reason why the language the site is developed in would have this effect, since the page titles etc. that the search engines read are in the final HTML produced; if it looks right in the HTML, it should look right to the crawlers. The same goes for the indexing of pages, although in that case there are more potential issues, but again none specific to Ruby on Rails. Care to give an example so we can have a look?
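The point that crawlers only ever see the rendered output can be illustrated with a tiny ERB sketch (the title text here is made up): whatever Ruby runs on the server, the search engine reads only the final `<title>` tag.

```ruby
require "erb"

# The crawler never sees the Ruby below; it sees only the HTML produced.
layout = ERB.new("<title><%= page_title %></title>")
page_title = "Jobs in Madrid | jobandtalent"
html = layout.result(binding)

puts html  # <title>Jobs in Madrid | jobandtalent</title>
```

If that rendered HTML is correct, the templating language behind it is invisible to Google.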
Related Questions
-
Follow no-index
I have a question about the right way to keep pages out of the index: with a canonical, or with follow,noindex. First we have a blog page:
URL: /blog/
index,follow
Page 2 of the blog:
URL: /blog?=p2
index,follow
rel="prev" /blog/
rel="next" ?=p3
Nothing strange here, I guess. But we also have other pages with a chance of duplicate content:
/SEO-category/
/SEO-category/view-more/
Because I don't want the "view-more" items to be indexed, I want to set them to follow,noindex (follow so crawlers can still reach pages). But the "view-more" pages also have pagination. What is the best way?
Option 1:
/SEO-category/view-more/
follow,noindex
/SEO-category/view-more?=p2
follow,noindex
rel="prev" /view-more/
rel="next" ?=p3
Option 2:
/SEO-category/view-more/
Canonical: /SEO-category/
/SEO-category/view-more?=p2
rel="prev" /view-more/
rel="next" ?=p3
Option 3: other suggestions? Thanks!
Technical SEO | | Happy-SEO
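For what it's worth, option 1 above boils down to emitting head tags like the following. A small Ruby sketch, with made-up helper names, just to show the exact markup each option produces:

```ruby
# Build the robots meta tag for a page, e.g. "noindex,follow" for a
# listing we want crawled but not indexed.
def robots_meta(index:, follow:)
  content = [index ? "index" : "noindex",
             follow ? "follow" : "nofollow"].join(",")
  %(<meta name="robots" content="#{content}">)
end

# Build rel="prev"/rel="next" link tags for a paginated series.
def pagination_links(prev_url: nil, next_url: nil)
  tags = []
  tags << %(<link rel="prev" href="#{prev_url}">) if prev_url
  tags << %(<link rel="next" href="#{next_url}">) if next_url
  tags
end

puts robots_meta(index: false, follow: true)
# <meta name="robots" content="noindex,follow">
puts pagination_links(prev_url: "/SEO-category/view-more/",
                      next_url: "/SEO-category/view-more?=p3")
```

Option 2 would instead emit a single `<link rel="canonical" href="/SEO-category/">` on every view-more page; the sketch makes it easy to compare what each variant actually puts in the head.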
Anything new in determining how many of a site's pages are in Google's supplemental index vs. the main index?
Since the site:mysite.com *** -sljktf trick stopped working several years ago, has anyone found another way to identify content that has been relegated to the supplemental index?
Technical SEO | | SEMPassion
Duplicate content problem
Hi there, I have a couple of related questions about the crawl report finding duplicate content.
We have a number of pages that feature mostly media, just a picture or just a slideshow, with very little text. These pages are rarely viewed, and they are identified as duplicate content even though the pages are indeed unique to the user. Does anyone have an opinion about whether we'd be better off just removing them, since we do not have the time to add enough text at this point to make them unique to the bots?
The other question: we have a redirect for any 404 on our site that follows the pattern immigroup.com/news/*; the redirect merely sends the user back to immigroup.com/news. However, Moz's crawl seems to be reading this as duplicate content as well. I'm not sure why that is, but is there anything we can do about it? These pages do not exist; they just come from someone typing in the wrong URL or clicking on a bad link. But we want the traffic, since the users land on a page that has a lot of content.
Any help would be great! Thanks very much! George
Technical SEO | | canadageorge
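On the /news/* question: the duplicate flags likely come from many distinct URLs all resolving to the same /news content. One common fix, sketched below with hypothetical paths, is to answer a genuine 404 for unknown URLs instead of blanket-redirecting everything:

```ruby
# Hypothetical sketch: known pages get a 200; unknown /news/* URLs get a
# real 404 instead of a redirect, so crawlers do not see many different
# URLs serving identical content.
KNOWN_NEWS_PATHS = ["/news", "/news/visa-update"]

def news_response(path)
  if KNOWN_NEWS_PATHS.include?(path)
    { status: 200 }
  else
    { status: 404 }   # was: every miss redirected to /news
  end
end

puts news_response("/news/visa-update")[:status]  # 200
puts news_response("/news/tyop-url")[:status]     # 404
```

A custom 404 page can still link prominently to /news, which keeps the traffic George wants without presenting nonexistent URLs as live pages.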
Https indexed... how?
Hello Moz, I have been struggling with an SEO case for a while. At the moment an https version of the homepage of one of our clients is indexed in Google. That's really strange, because the URL has been redirected to another URL for three weeks now, and we did everything to make clear to Google that it has to index the other URL.
So we have a few homepage URLs:
A https://www.website.nl
B https://www.websites.nl/category
C http://www.websites.nl/category
What we did: redirected A with a 301 to B (a redirect from A or B to C is difficult because of the security issue with the SSL certificate). We put the right canonical URL (version C) on every version of the homepage (A, B). We put only the canonical URLs in the sitemap.xml, only version C, and uploaded it to Google Webmaster Tools. We changed all important internal links to version C. We also got some valuable external backlinks to version C.
Is there something I missed, or forgot to say to Google: hey look, you've got the wrong URL indexed, you have to index version C? How is it possible Google still prefers version A after making all those changes three weeks ago? I'm really looking forward to your answer. Thanks a lot in advance! Greetz, Djacko
Technical SEO | | Searchresult
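The setup described above can be reduced to a simple mapping: ideally every non-canonical variant answers with a single 301 hop straight to version C, rather than a chain A → B → C. A toy sketch of that target state (URLs from the question; the SSL-certificate issue Djacko mentions still has to be solved before https variants can actually serve this redirect):

```ruby
# Target state: one permanent redirect from every variant directly to
# the canonical URL; the canonical itself serves 200. Chains like
# A -> B -> C slow down re-consolidation of the preferred version.
CANONICAL = "http://www.websites.nl/category"  # version C

def redirect_for(url)
  return nil if url == CANONICAL
  { status: 301, location: CANONICAL }
end

puts redirect_for("https://www.website.nl").inspect
puts redirect_for("https://www.websites.nl/category").inspect
puts redirect_for(CANONICAL).inspect  # nil, canonical serves 200
```

Three weeks is also simply not very long; with the 301s, canonicals, and sitemap all pointing at C, the index usually catches up once the redirect chain is collapsed.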
Wordpress hAtom problem
Hi, in Webmaster Tools I receive the following warnings:
hatom-feed / hatom-entry:
Warning: At least one field must be set for HatomEntry.
Warning: Missing required field "entry-title".
Warning: Missing required field "updated".
Warning: Missing required hCard "author".
I googled a few strategies for solving this problem, but is it really necessary for SEO purposes to edit the theme's core code to satisfy Google's warnings?
Technical SEO | | reisefm
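For reference, those warnings are about missing hAtom microformat classes in the theme's post markup. A minimal sketch of an entry carrying the three required pieces (entry-title, updated, and an author hCard); the Ruby wrapper and all content here are made up, only the class names matter:

```ruby
# Emit a minimal hAtom entry containing the fields Webmaster Tools
# complains about: entry-title, updated, and an author hCard.
def hatom_entry(title:, updated:, author:, body:)
  <<~HTML
    <article class="hentry">
      <h1 class="entry-title">#{title}</h1>
      <time class="updated" datetime="#{updated}">#{updated}</time>
      <span class="author vcard"><span class="fn">#{author}</span></span>
      <div class="entry-content">#{body}</div>
    </article>
  HTML
end

html = hatom_entry(title: "Hello world", updated: "2013-05-01",
                   author: "reisefm", body: "Post body here.")
puts html
```

In a WordPress theme these classes go in the post loop template (usually a child theme, so core files stay untouched); the warnings themselves do not directly affect rankings, they just mean the markup claims hAtom without completing it.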
Duplicate URL problem causing me problems
Hi, I am working with a Joomla site and I am using the sh404sef plugin. I have contacted the developer of the plugin, who has not been very helpful, so I am hoping to get help here.
The problem I am having is that the description of the page showing in Google listings is not the same as what I have put into the meta description tag. For example, for the page http://www.clairehegarty.co.uk/virtual-gastric-band-with-hypnotherapy the meta description should be: "Gastric Band Hypnotherapy to lose weight guaranteed. Free Gastric Band Hypnosis Consultations with Well Known Gastric Hypno Band expert as seen on TV. Hypno Gastric Band Works. We offer full support after your Gastric Band Hypnotherapy." But in Google it is showing: "Gastric Band Hypnotherapy Works. If you would like a slimmer and healthier body with all the benefits of weight loss surgery without any of the risks that can be ..."
Now, one thing I have noticed in the sh404sef control panel: I have the following, which is the original URL from day one:
index.php?option=com_content&Itemid=190&id=153&lang=en&view=article
But then I have the one below, which is not the original:
index.php?option=com_content&Itemid=190&catid=150&id=153&lang=en&view=article
I keep deleting the one that is not the original, but it keeps coming back, and I have been told this could be the fault. Can anyone please help me with this and work out how to stop it from coming back, so Google shows the correct description?
Technical SEO | | ClaireH-184886
Index page 404 error
Crawl results show there is a 404 error for a page called index.htmk, and it is under my root: http://mydomain.com/index.htmk. I have checked my index page on the server, and it is index.html rather than index.htmk. Please help me fix it.
Technical SEO | | semer
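A typo'd URL like this usually means some internal or external link points at index.htmk. The usual fix is twofold: correct the link, and 301 the typo'd path to the real index so any remaining references resolve. A hypothetical sketch of that mapping:

```ruby
# Map the typo'd URL to the real index page with a permanent redirect;
# separately, find and fix whatever link points at index.htmk.
REDIRECTS = { "/index.htmk" => "/index.html" }

def response_for(path)
  target = REDIRECTS[path]
  target ? { status: 301, location: target } : { status: 200 }
end

puts response_for("/index.htmk")[:status]  # 301
puts response_for("/index.html")[:status]  # 200
```

On Apache the equivalent would be a single Redirect 301 rule in .htaccess.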
Images Reference Other Web Server
One of my real estate clients has a website that was built by a small web design company. In reviewing the website I've discovered that many of the images on the website (i.e. banners, social networking icons, etc.) are not hosted on my client's server but on the web developer's server. Ex. src="http://www.[WebDevelopmentCompany].com/ubertor/[ClientsName]/properties_image.jpg" Will this funnel PageRank/link juice away from my client's website? This struck me as odd, and it's not an issue I've come across before.
Technical SEO | | calin_daniel