There are a lot of factors that could have caused this. Without some background on your recent actions, links and changes... or at least the domain name, it is impossible to give a valid answer.
The question is too generic, sorry.
Tracking more than 3 competitors is a must nowadays; for those of us paying for the PRO version, this is by far one of the weakest limitations of your otherwise wonderful service.
When you manage a popular ecommerce site, coupon pages are a natural source of traffic. Those pages do some link building for you, even though you never submitted any coupons to them.
If you have other types of links apart from the coupon sites, I would not give too much importance to this issue. Just check whether any of those sites could be seen as spammy, and ask for a link removal in that case.
That kind of behaviour is really strange. Are you always searching from the same computer, and always logged out, or at least always with the same Google account? It seems like you are getting some kind of customized experience based on your geolocation, browsing history, Google account or something like that.
To check your rankings, always use an external tool like Moz.
The problem is that you are implementing a canonical from all those parametrized pages to http://mathematica-mpr.com/news/, and the content on that page (an empty result list) is not the same as on the original link you provide in your post, which does show article results.
A canonical is used to indicate the preferred URL for a page when two or more URLs lead to the same content. After implementing the canonical, you also have to make sure you are not linking to the "bad" URL, or it will never disappear from Google's SERPs.
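For reference, the tag lives in the <head> of each parametrized page and points to the preferred URL; in your current setup it would be something like this:

<link rel="canonical" href="http://mathematica-mpr.com/news/" />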
As your content is not the same on both pages, and the old URL is still accessible from your site, you are not fulfilling the requirements to make those URLs disappear.
If you don't want those listings to appear in the SERPs (depending on the search options they lead to very generic pages, and it is almost impossible to clearly identify a canonical reference for them), I think the best option is to add a line to robots.txt that blocks those parameters, and you will be done with those problems:
Disallow: /news/?facet=*
This will remove your search results from the index, ending the duplication problems. Please make sure this is what you want, or decide whether you prefer to keep those results indexed even though you continue having duplication issues.
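Keep in mind that the Disallow line has to sit inside a User-agent group; assuming you want it to apply to every crawler, the relevant part of robots.txt would look like this:

User-agent: *
Disallow: /news/?facet=*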
As long as you use valid HTML, use proper tags for the title and every section, and don't let the HTML size grow too much, it won't damage your rankings.
Hello.
This is a common problem for those of us working in ecommerce. Every filter, sorting widget and any other option you give the customer to browse your site more comfortably becomes a pain in the ass as duplicated content for search engines.
Apart from implementing all the canonical tags as you say you did, you could also take a look at removing as many parameters as you can in Webmaster Tools (especially those dealing with sorting or the number of items shown).
Finally, you should decide whether you want to keep bots from indexing those special filter combinations so that you can focus on the category pages. You give the example of pagination, but it also applies to filtering by manufacturer. The option we take, and which I think is the best one, is to add a "noindex" meta tag to those kinds of pages and only index the main page of each category (see the example below).
If we think a filter is important as a keyword (for example, "adidas soccer boots" for the category "soccer boots" with the brand filter adidas), we create a special description for that page so that it is no longer duplicate content. If we are not able to manually create that description, we just add the noindex tag as I said before and forget about that page on search engines.
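The meta tag itself is short; placed in the <head> of each filtered page, a noindex that still lets bots follow the links would look like this:

<meta name="robots" content="noindex, follow" />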
If it is a personal blog, it is a correct way of implementing those meta tags, as publisher and author are the same person. If it is a site with several authors, I would change the publisher tag so that it points to your site's Google+ page, and have the author tag point to each author's profile.
That way you will benefit from both tags.
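As a sketch (the Google+ URLs below are just placeholders), the two tags in your <head> would look like this:

<link rel="publisher" href="https://plus.google.com/+YourSitePage" />
<link rel="author" href="https://plus.google.com/+AuthorProfile" />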
Removing them from the sitemap will not make them disappear from the Google index. A sitemap is a tool that allows the spider to discover new pages, but once they are indexed, they won't disappear from the index just because you remove them from it.
If you don't want them to be indexed, you can remove them using Google Search Console, going to "Optimization" / "Remove URLs". It is faster than including the noindex meta tag.
If they contain just a link, as in your example, I would remove them without any doubt.
Which URLs do you include in your sitemap? Could you check whether you are trying to index
https://www.zenory.com.au/psychic-readings/psychic-readings or https://www.zenory.com.au/psychic-readings ?
The first one is the URL you link to in the menus, but it has a 301 redirect to the second URL format (and the same goes for the rest of the main options). That is quite a bad idea. Please make sure you include the correct address in the sitemap, and not the one with the 301 redirect. That could be why Google Webmaster Tools is not showing that page from your sitemap as indexed: although the final page is properly indexed in Google (as you can check by searching for site:www.zenory.com.au), GWT is not able to match both addresses.
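In other words, the sitemap entry should use the final URL, something like:

<url>
  <loc>https://www.zenory.com.au/psychic-readings</loc>
</url>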
There is a limit to the number of redirections Google and browsers will follow in a chain: you cannot create a loop, or chain endless, nonsensical 301 redirections.
But there is no limit on the number of pages you redirect: in any redesign, every one of your similar pages (for example, product pages) will create a redirection. 600 is not a big number of products for an ecommerce site, or a big number of posts for a blog... Don't worry.
Matt Cutts speaks exactly about your doubt here.
Hello.
Just create a file called .htaccess in your root with these lines of code:
RewriteEngine On
# Redirect any request for default.asp to the same URL without it (301)
RewriteCond %{THE_REQUEST} ^.*/default\.asp
RewriteRule ^(.*)default\.asp$ /$1 [R=301,L]
Also, use a code editor to look through your code for occurrences of "default.asp", to make sure you are not linking to that full address, as such a link could be the cause of Google indexing the wrong URL.
If you are not comfortable with .htaccess and redirections, you could also implement a canonical meta tag to indicate your preferred URL for that page.
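If you go that route, the tag on the default.asp page would point to the clean URL (the domain below is just a placeholder):

<link rel="canonical" href="http://www.example.com/" />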
Hello.
Before starting from scratch, try to optimize Drupal. There are some simple things you can do that speed Drupal up amazingly:
Try them and see if they help while you find the source of the problem.
If your website is not related to Pay Day Loans, you should do something about it:
First, visit the origin of those links. Are they automated comments on blogs, with strange messages unrelated to the post or to your website? Judging by the anchor text of your links, I would bet you have that kind of incoming link profile. If that is the case, you should definitely do something about those links. Otherwise, if you discover they are interesting, in-context links inside genuinely worthy content, you can let them be.
If you detect that they are spam, you should first contact the webmaster of each site. Maybe he is not aware of having those links; they are probably caused by some kind of bug, or by a form open to visitors. If he is a webmaster concerned about SEO, he will remove the links soon.
Only if you don't get a reply from the webmaster, and only in that case, should you use the disavow tool: it is for when you are not able to have those links removed by normal means, and your last option is to tell Google you don't want those links taken into consideration by their algorithm when ranking your site.
Good luck with doing that manual work on the 200 links.
Hello.
I would include a different description and title for every section, to avoid them being seen as duplicates if you only list the events.
Then, for the navigation on each section, you have three good options:
Any of those approaches would help Google quite a lot to understand what is happening on your site.
I hope it helps you.
The best option in my opinion is to combine Moz's "Fresh Web Explorer" with Google Alerts.
You can receive daily updates about any keyword.
Search engines are getting good at identifying common problems like this, but it is in fact a duplicate content issue. Given the low cost of redirecting one of those options to the other, or of implementing a canonical tag, I would not risk being detected as duplicate.
Also, always using the same notation will help you concentrate links on one page, as any incoming link will point directly to the correct address. If you randomly use both versions of the URL and both return content, visitors will copy the link and you will end up with links pointing to both of them, damaging your link building.
Which option is best? It does not matter. For users it is usually "cleaner" to see no trailing slash, as it is interpreted as visiting a document and not a folder. But either of them is fine.
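As a sketch, if you settle on the version without the trailing slash and you are on Apache, an .htaccess rule like this would 301 the other version (it skips real directories so folder URLs keep working):

RewriteEngine On
# Redirect URLs ending in a slash to the same URL without it, except real directories
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.*)/$ /$1 [R=301,L]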
It is a technical issue with your ecommerce platform. It is definitely not good to have that kind of different URLs for the same product.
Canonicals are helpful for pages where you cannot do anything but have two similar pages on your site, or when the pages are almost identical. But when dealing with a page as important as the product page of an ecommerce site, you should definitely take action and manage to have a unique URL for every product, not depending on the path the visitor follows to reach that page.
Otherwise it will become difficult to measure conversion rates or any other KPI in Analytics, and it will also become a problem for SEO, with so many different pages to link to.
Maybe you could index your galleries, which show the small thumbnails so they don't weigh those 7 MB you talk about, and link with an a href to the full image size.
The other option is to keep working as you do and manually insert a title and a small description for each image page. This would definitely improve your SEO for those images, but obviously it is manual work that I don't know whether you will be able to do, depending on the volume of images you process.
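For the first option, the markup would be something like this (the paths are just placeholders):

<a href="/images/full/photo-1234.jpg">
  <img src="/images/thumbs/photo-1234.jpg" alt="Short description of the photo" />
</a>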
I cannot know your exact case as you don't provide a URL, but I had a similar issue some weeks ago, which I have since solved. For pages that are strongly linked internally on your site, Google usually shows the most common anchor text of the internal links pointing to that page, instead of the HTML title tag.
We changed the anchor text in our main menu and it soon changed on the SERPs.
Could that be your case? Do you have that page linked as "Isagenix Australia" in your menu, footer or any other place in your site structure, or a lot of external links with that anchor?
Hello.
Don't panic: people won't see those results. You probably saw them because you have Evernote Web Clipper on your PC. This is a feature of the Web Clipper Chrome extension preferences, which is why you saw that format of SERP.
To avoid seeing these results on your Chrome view, uncheck the box next to "When enabled, searching the web on supported search engines will also be performed on your Evernote account."
If that redirect works (check not only the front page, but also internal pages) and you still see the three kinds of results, you should implement the canonical tag on your site to make sure it is treated as the same page regardless of which URL is called.
It is not a must nowadays. As soon as you get some external links, bots will be able to start indexing your pages. But, honestly, creating an automatic sitemap is an easy task, and it certainly speeds up the process of getting the whole site into the search engines, especially the deepest pages. It is also a way to get areas indexed which are not accessible via links.
So, it is not the main goal in SEO, but it is still recommended, especially for the launch of a new site.
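For reference, a minimal sitemap.xml is just a list of URLs (the domain below is a placeholder):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.example.com/</loc>
  </url>
  <url>
    <loc>http://www.example.com/some-deep-page</loc>
  </url>
</urlset>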
Basically all the text on your pages is the same, except for some small numbers which in proportion represent a really small amount of text, and some meta tags.
You should mix the standard template with some kind of database information for each neighbourhood. For example, insert a small description of the area, or visitor comments. If you are looking for something more automatic than a description, maybe you could query some kind of web service that lets you show the most important streets in the district: that would create different text for every page without manual work.
The only way to avoid that duplicate content is to, in fact, have different content :(.
A lot of directories, especially those based on old scripts, have issues with the regex that checks whether the URL is valid, and force the address to start with http:// instead of accepting https://. It is a problem with the directory, not with your website.
You should make sure you have a redirection from your http version to your https version, and then submit the http URL to the directory causing that failure. Please make sure the website loads over http via a redirection, and does not serve the content directly, or you will have a serious duplication issue.
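A sketch of that redirection in .htaccess (assuming Apache with mod_rewrite) would be:

RewriteEngine On
# Send every http request to the https version of the same URL with a 301
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]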