URLs in Russian
-
Hi everyone,
I am doing an audit of a site that currently has a lot of 500 errors due to the Russian language.
Basically, all the URLs look like this for every page in Russian:
http://www.exemple.com/ru-kg/pешения-для/food-packaging-machines/
http://www.exemple.com/ru-kg/pешения-для/wood-flour-solutions/
http://www.exemple.com/ru-kg/pешения-для/cellulose-solutions/
I am wondering if this error is really caused by the server or if Google has difficulty reading the Russian language in URLs.
Is it better to have the URLs only in English?
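For reference, here is a quick Python sketch (just an illustration, not code from our site) of how a Cyrillic path segment is normally percent-encoded before it travels over HTTP; the segment is the Russian word from the URLs above:

```python
from urllib.parse import quote, unquote

# Cyrillic path segment from the example URLs
segment = "решения-для"

# Browsers and crawlers send non-ASCII path characters as UTF-8, percent-encoded
encoded = quote(segment)
print(encoded)
# %D1%80%D0%B5%D1%88%D0%B5%D0%BD%D0%B8%D1%8F-%D0%B4%D0%BB%D1%8F

# A correctly configured server decodes that straight back to the Cyrillic string
print(unquote(encoded))  # решения-для
```

From what I've read, as long as the links and the server both agree on UTF-8, Cyrillic in the path should not be a problem in itself.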
-
Hi Alexandre,
Google should have no problem indexing URLs with Cyrillic characters, but it could be the mix of languages in the URL that is causing Google to attempt to decode those characters.
But even if that were the case, this should not result in a 500 error but in a 404 (Not Found) for the resulting decoded URLs.
It looks like there are 301 redirects in place for these URLs now, pointing to their EN counterparts - has that resolved the issue? Perhaps faulty redirect logic was what caused the 500 errors in the first place?
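If it helps, here is a rough way to check what one of those URLs actually returns (first response and full redirect chain), assuming you have the Python requests library installed; the URL is just the example from your question:

```python
import requests

# Example URL from the question; swap in any affected page
url = "http://www.exemple.com/ru-kg/pешения-для/food-packaging-machines/"

# Don't follow redirects, so we see the very first response the server gives
resp = requests.get(url, allow_redirects=False, timeout=10)
print(resp.status_code)              # e.g. 301, 404 or 500
print(resp.headers.get("Location"))  # redirect target, if any

# Then follow the full chain to see where the URL finally lands
final = requests.get(url, timeout=10)
print([r.status_code for r in final.history], "->", final.status_code, final.url)
```

If the first hop is already a 500, the problem is in how the server decodes the path; if the 500 only appears after a redirect, the redirect logic itself is the likelier culprit.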
Thanks,
Mike
-
Yes, exactly!
-
I do believe the URLs are indexed (based on his URL), and I know that you can use non-English characters in URLs.
Do you get the 500 error when you Fetch as Google for a URL?
-
To give you an example, Google is giving 500 errors like this:
http://www.exemple.com/ru-lt/pÐµÑˆÐµÐ½Ð¸Ñ -Ð´Ð»Ñ /food-packaging-machines/
It's as if Google is translating the Russian folder into a language that it recognises.
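For what it's worth, those characters look like classic mojibake: the UTF-8 bytes of the Cyrillic segment being read back as a Western single-byte encoding somewhere along the way. A small Python sketch (my own illustration, not from the site) reproduces the effect:

```python
# The Russian path segment as it should appear
segment = "решения-для"

# Encode it as UTF-8 - this is what actually travels in the URL, percent-encoded...
utf8_bytes = segment.encode("utf-8")

# ...then wrongly decode those same bytes as Windows-1252, which yields
# garbled strings very similar to the one Google is reporting
garbled = utf8_bytes.decode("cp1252", errors="replace")
print(garbled)
```

(Note that the first letter in the site's URLs appears to be a Latin "p" rather than the Cyrillic "р", which is why it comes through unchanged in the garbled version.)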
-
Add the site to Google Search Console and do "Fetch as Google" to see how they would index your pages.
Related Questions
-
How do I treat URLs with bookmarks when migrating a site?
I'm migrating an old website into a new one, and have several pages that have bookmarks on them. Do I need to redirect those, or how should they be treated? For example, both https://www.tnscanada.ca/our-expertise.html and https://www.tnscanada.ca/our-expertise.html#auto resolve.
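One detail that matters here: the #bookmark fragment is never sent to the server, so a single redirect on the path covers every fragment variant, and the browser re-applies the fragment after following the redirect. A small Python illustration using the URLs from the question:

```python
from urllib.parse import urldefrag

urls = [
    "https://www.tnscanada.ca/our-expertise.html",
    "https://www.tnscanada.ca/our-expertise.html#auto",
]

for url in urls:
    base, fragment = urldefrag(url)
    # Both entries print the same base URL: only that part reaches the server,
    # so one 301 rule for /our-expertise.html handles both addresses
    print(base, "| fragment kept client-side:", fragment or "(none)")
```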
Intermediate & Advanced SEO | NatalieB_Kantar
-
410 or 301 after URL update?
Hi there, A site I'm working on atm has a thousand "not found" errors on Google Search Console (of course, I'm sure there are thousands more it's not showing us!). The issue is that a lot of them seem to come from a URL change.
The damage has been done, the URLs have been changed and I can't stop that... but as you can imagine, I'm keen to fix as many as humanly possible. I don't want to go mad with 301s - but for external links in, this seems like the best solution? On the other hand, Google is reading internal links that simply aren't there anymore. Is it better to hunt down the new page and 301 it anyway? Or should I 410 and grit my teeth while Google crawls and recrawls it, warning me that this page really doesn't exist?
Essentially I guess I'm asking: how many 301s are too many, and will they affect our DA? And what's the best solution for dealing with mass 404 errors - many of which aren't attached or linked to from any other pages anymore? Thanks for any insights 🙂
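A practical way to work through a list like this is to build a mapping of old URLs to their new counterparts, 301 only the ones that genuinely have a replacement, and return 410 for the rest. A hypothetical sketch that emits Apache-style directives (the paths are invented, and the output assumes an Apache server; other servers use different syntax):

```python
# Hypothetical mapping of old paths to their new equivalents;
# None means the page has no replacement and should be treated as gone
url_map = {
    "/old-blog/some-post/": "/blog/some-post/",
    "/old-category/widgets/": "/products/widgets/",
    "/discontinued-page/": None,
}

for old_path, new_path in url_map.items():
    if new_path:
        # Apache mod_alias: permanent = 301
        print(f"Redirect permanent {old_path} {new_path}")
    else:
        # Apache mod_alias: gone = 410 (no target URL is given)
        print(f"Redirect gone {old_path}")
```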
Intermediate & Advanced SEO | Fubra
-
How important is the file extension in the URL for images?
I know that descriptive image file names are important for SEO. But how important is it to include .png, .jpg, .gif (or whatever file extension) in the URL path? i.e. https://example.com/images/golden-retriever vs. https://example.com/images/golden-retriever.jpg
Furthermore, since you can set the filename in the Content-Disposition response header, is there any need to include the descriptive filename in the URL path? Since I'm pulling most of our images from a database, it'd be much simpler to not care about simulating a filename, and just reference an image id in my templates. Example:
1. Browser requests GET /images/123456
2. Server responds with the image, setting both Content-Disposition and Link (canonical) headers:
Content-Disposition: inline; filename="golden-retriever"
Link: <https://example.com/images/123456>; rel="canonical"
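To make the setup in the question concrete, here is a minimal sketch of that kind of handler, using Flask purely as an illustration; the image table, file path and domain are placeholder assumptions:

```python
from flask import Flask, send_file

app = Flask(__name__)

# Hypothetical stand-in for a database lookup: image id -> (file on disk, descriptive slug)
IMAGES = {123456: ("/data/img/123456.jpg", "golden-retriever")}

@app.route("/images/<int:image_id>")
def serve_image(image_id):
    path, slug = IMAGES[image_id]
    response = send_file(path, mimetype="image/jpeg")
    # The descriptive filename lives in a header instead of the URL path
    response.headers["Content-Disposition"] = f'inline; filename="{slug}.jpg"'
    # Declare the extensionless URL as the canonical version of this resource
    response.headers["Link"] = f'<https://example.com/images/{image_id}>; rel="canonical"'
    return response
```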
Intermediate & Advanced SEO | dsbud
-
If I block a URL via the robots.txt - how long will it take for Google to stop indexing that URL?
Intermediate & Advanced SEO | Gabriele_Layoutweb
-
URL structure change and xml sitemap
At the end of April we changed the URL structure of most of our pages and 301 redirected the old pages to the new ones. The XML sitemaps were also updated at that point to reflect the new URL structure. Since then Google has not indexed the new URLs from our XML sitemaps, and I am unsure why. We are at 4 weeks since the change, so I would have thought they would have indexed the pages by now. Any ideas on what I should check to make sure the pages are indexed?
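One thing worth checking in a case like this is whether every URL in the new sitemap actually returns a clean 200, rather than another redirect or an error. A rough sketch, assuming the Python requests library and a sitemap at a hypothetical location:

```python
import requests
import xml.etree.ElementTree as ET

SITEMAP_URL = "https://www.example.com/sitemap.xml"  # placeholder location
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

sitemap = requests.get(SITEMAP_URL, timeout=10)
root = ET.fromstring(sitemap.content)

for loc in root.findall(".//sm:loc", NS):
    url = loc.text.strip()
    resp = requests.get(url, allow_redirects=False, timeout=10)
    if resp.status_code != 200:
        # Sitemap entries that redirect or error out are a common reason
        # new URLs are slow to be picked up
        print(resp.status_code, url, resp.headers.get("Location", ""))
```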
Intermediate & Advanced SEO | ang
-
Duplicate Titles caused by multiple variations of same URL
Hi. Can you please advise how I can overcome this issue? Moz.com's crawler is indicating I have hundreds of duplicate title tag errors. However, this is caused because many URLs have been indexed multiple times in Google. For example:
www.abc.com
www.abc.com/?b=123
www.abc.com/
www.abc.com/?b=654
www.abc.com/?b=875
www.abc.com/index.html
What can I do to stop this issue being reported as duplicate titles, as well as duplicate content? I was thinking maybe I can use robots.txt to block various query string parameters. I'm open to ideas and examples.
Intermediate & Advanced SEO | adhunna
-
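On the "ideas and examples" point in the question above: these variations are usually consolidated with a rel=canonical tag pointing at one preferred URL, and the normalisation involved looks roughly like this Python sketch (the URLs are the examples from the question, with a scheme added so they parse):

```python
from urllib.parse import urlsplit, urlunsplit

variants = [
    "http://www.abc.com",
    "http://www.abc.com/?b=123",
    "http://www.abc.com/",
    "http://www.abc.com/?b=654",
    "http://www.abc.com/?b=875",
    "http://www.abc.com/index.html",
]

def canonical(url):
    parts = urlsplit(url)
    path = parts.path or "/"
    if path.endswith("/index.html"):
        path = path[: -len("index.html")]
    # Drop the query string entirely, so every variant collapses to one URL
    return urlunsplit((parts.scheme, parts.netloc, path, "", ""))

for v in variants:
    print(v, "->", canonical(v))
# All six variants map to http://www.abc.com/
```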
Crazy long weird URLs... help
I have an HTML website, mysite1.com, and I placed a link on the home page to another one of my sites, mysite2.com. Today I checked the links to mysite2.com in Majestic and noticed 24 links coming from mysite1.com instead of just one link. The URLs from mysite1.com that are showing in Majestic are like this:
mysite1.com/?epl=4donafvFK3fMXxZXMWQRQLodmPchoXCK5C7-kbBv_agkwlkJrZAoaSDVUlhqFmUqt0f8c2Q6jF6GO6DNMnbidqRsikriF-IEBEt5okmICLEB0FxP36GrsxoPGQ3SGBo1PVR7itDUA4CYmjypn5gi
mysite1.com was inherited from a friend and I believe it was originally built in FrontPage. Can you tell me how I can get rid of these multiple links, as I only want one showing from the home page. Thanks in advance
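A quick way to see whether those ?epl= links are actually generated by the page itself (for example by an old plugin or FrontPage add-on) rather than by Majestic is to fetch the home page and list every link carrying the parameter. A rough Python sketch; mysite1.com stands in for the real domain:

```python
import requests
from html.parser import HTMLParser
from urllib.parse import urlsplit, parse_qs

class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag on the page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

html = requests.get("http://mysite1.com/", timeout=10).text
parser = LinkCollector()
parser.feed(html)

for link in parser.links:
    query = parse_qs(urlsplit(link).query)
    if "epl" in query:
        # Any hit here means the tracking parameter is being added in the HTML itself
        print(link)
```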
Intermediate & Advanced SEO | JohnPeters
-
Blocking Dynamic URLs with Robots.txt
Background: My e-commerce site uses a lot of layered navigation and sorting links. While this is great for users, it ends up in a lot of URL variations of the same page being crawled by Google. For example, a standard category page:
www.mysite.com/widgets.html
...which uses a "Price" layered navigation sidebar to filter products based on price, also produces the following URLs, which link to the same page:
http://www.mysite.com/widgets.html?price=1%2C250
http://www.mysite.com/widgets.html?price=2%2C250
http://www.mysite.com/widgets.html?price=3%2C250
As there are literally thousands of these URL variations being indexed, I'd like to use robots.txt to disallow them.
Question: Is this a wise thing to do? Or does Google take layered navigation links into account by default, so I don't need to worry?
To implement, I was going to do the following in robots.txt:
User-agent: *
Disallow: /*?
Disallow: /*=
...which would prevent any dynamic URL with a '?' or '=' from being crawled. Is there a better way to do this, or is this a good solution? Thank you!
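Before shipping rules like that, it helps to sanity-check them against real URLs. As far as I know, Python's built-in robots.txt parser doesn't handle the * and $ wildcard extensions, so this sketch translates each Disallow pattern into a regular expression as a rough approximation of how the wildcards match:

```python
import re

RULES = ["/*?", "/*="]  # the proposed Disallow patterns

def blocked(path, rules=RULES):
    for rule in rules:
        # Translate robots.txt wildcards: '*' matches anything, '$' anchors the end
        pattern = "^" + re.escape(rule).replace(r"\*", ".*").replace(r"\$", "$")
        if re.search(pattern, path):
            return True
    return False

tests = [
    "/widgets.html",                # plain category page: should stay crawlable
    "/widgets.html?price=1%2C250",  # filtered variation: should be blocked
    "/widgets.html?price=2%2C250",
]
for path in tests:
    print(path, "->", "blocked" if blocked(path) else "allowed")
```

Note that robots.txt only stops crawling; URLs that are already indexed are usually better handled with rel=canonical or a noindex directive.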
Intermediate & Advanced SEO | AndrewY1