Posts made by hectormainar
-
RE: Will obfuscating HTML have a bad effect on my ranking?
As long as you use valid HTML, use proper tags for the title and every section, and don't let the HTML size grow too much, it won't damage your rankings.
-
RE: My Evernote Notes showing up on Google Search page ?
Hello.
Don't panic: other people won't see those results. You are probably seeing them because you have the Evernote Web Clipper installed on your PC. It is a feature of the Web Clipper Chrome extension preferences, which mixes your Evernote notes into the results page you see.
To stop seeing these results in Chrome, uncheck the box next to "When enabled, searching the web on supported search engines will also be performed on your Evernote account."
-
RE: Is it possible to rank a RE-DIRECT in Google ?
Is it a 301 redirect? You won't rank that page, as all the ranking value will flow to the destination page.
-
RE: Unfamiliar Meta Description Tags
As long as you also include name="description" and its value in the content attribute, having an ID on a meta tag probably won't prevent it from being read correctly by Google, although it is not W3C compliant. According to the HTML specification, the meta element cannot have an ID attribute; the only valid attributes for meta tags are:
- name = name [CS]: identifies a property name. The specification does not list legal values for this attribute.
- content = cdata [CS]: specifies a property's value. The specification does not list legal values for this attribute.
- scheme = cdata [CS]: names a scheme to be used to interpret the property's value (see the section on profiles for details).
- http-equiv = name [CI]: may be used in place of the name attribute. HTTP servers use this attribute to gather information for HTTP response message headers.
Also, you can include lang (language information) and dir (text direction).
Do you know why that ID is included and how to remove it if it has no use?
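For reference, a minimal sketch of a spec-compliant description meta tag (the content value here is hypothetical) would be:
<meta name="description" content="A short summary of what this page is about." />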
-
RE: Seeing lots of 0 seconds session duration from AdWords clicks
Another cause, which is especially common on display networks (not so much in AdWords search), is that some of the pages serving your ads show your site with pop-under scripts. This allows the site to create the affiliation cookie without the user noticing it.
In your case, since it is based on only 17 visits, it can only be the natural reason: people are not interested in your content for that keyword and leave too soon, as Ryan says.
-
RE: Links from coupon sites
When you manage a popular ecommerce site, coupon pages are a natural source of traffic. Those pages do some link building for you, even though you don't submit coupons to them.
If you have other types of links apart from the coupon sites, I would not give too much importance to this issue. Just check whether any of those sites could be seen as spammy, and ask for a link removal if so.
-
RE: Galleries and duplicate content
Maybe you could index your gallery pages, which show the small thumbnails so that they do not weigh the 7 MB you mention, and link to the full-size image with a plain a href (see the sketch below).
The other option is to keep working as you do and manually insert a title and a small description for each image page. That would definitely improve your SEO for those images, but it is obviously manual work, and I don't know whether you will be able to do it, depending on the volume of images you process.
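For the first option, the pattern is simply a thumbnail wrapped in a link to the full-size file; a minimal sketch with hypothetical paths:
<a href="/images/full/photo-001.jpg">
  <img src="/images/thumbs/photo-001.jpg" alt="Short description of the photo" />
</a>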
-
RE: URL path randomly changing
It is a technical issue with your ecommerce platform, and it is definitely not good to have that kind of varying URLs.
Canonicals are helpful for pages where you cannot avoid having two similar pages on your site, or when there are almost identical pages. But when dealing with a page as important as the product page on an ecommerce site, you should definitely take action and arrange to have a unique URL for every product, independent of the path the visitor follows to reach it.
Otherwise it will become difficult to measure conversion rates or any other KPI in Analytics, and it will also become a problem for SEO, with so many different pages to link to.
-
RE: How do you check what links there are to a specific page on a site?
You just have to type link:http://www.yourdomain.com/whateverpageyouwanttocheck into Google's search box and you will get a list of pages which link to that page (keep in mind Google only shows a sample of the links it knows about).
-
RE: Server response time: restructure the site or create the new one? SEO opinions needed.
Hello.
Before starting from scratch, try to optimize Drupal. There are some simple things you can do which speed up Drupal amazingly:
- Go to the Administer » Site configuration » Performance page and enable the options "Aggregate and compress CSS files" and "Aggregate JavaScript files".
- On the same page, activate the cache: "Cache pages for anonymous users" and "Cache blocks".
See if that helps while you track down the source of the problem.
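If you prefer to force those settings in code rather than through the admin UI, and assuming you are on Drupal 7, something like this in sites/default/settings.php should have the same effect (a sketch, not tested on your install):
<?php
// Performance settings forced from settings.php (Drupal 7 variable overrides)
$conf['cache'] = 1;              // "Cache pages for anonymous users"
$conf['block_cache'] = 1;        // "Cache blocks"
$conf['preprocess_css'] = TRUE;  // "Aggregate and compress CSS files"
$conf['preprocess_js'] = TRUE;   // "Aggregate JavaScript files"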
-
RE: I am trying to better understand solving the duplicate content issues highlighted in your recent crawl report of our site - www.thehomesites.com.
Basically all the text on your pages is the same, except for some small numbers, which in proportion represent a really small amount of text, and some meta tags.
You should make a mix of the standard template and some kind of database-driven information for each neighbourhood, for example a small description of the area or visitor comments. If you are looking for something more automatic than a description, maybe you could query some kind of web service that lets you show the most important streets in the district: that would create different text for every page without manual work.
The only way of avoiding that duplicate content is to in fact have different content :(.
-
RE: What's a good way to find recent blog posts about a given topic?
Best option in my opinion is to combine "Fresh web explorer" from Moz and Google Alerts.
You can receive daily updates about any keyword.
-
RE: I'm invisible!
Without looking too deeply into the site, I can see on ahrefs.com that you only have 10 referring domains, and only 7 of them are not nofollow. You should focus on a link building strategy first.
Also, minify and combine your .js files. You load far too many external files for a site like that.
Why do you block product pages? They are the most important pages on an ecommerce site.
-
RE: Why my website disappears for the keywords ranked, then reappears and so on?
That kind of behaviour is really strange. Are you always searching from the same computer, and always logged out, or at least always with the same Google account? It seems like you are getting some kind of customized experience based on your geolocation, browsing history, Google account or something like that.
To check your rankings, always use an external tool like Moz.
-
RE: Alt text from an external site
You should find an equilibrium here.
The best option is to have some powerful keywords in the link anchor, or in this case in the image ALT. Having this kind of external link is really valuable. But don't over-optimize it and use the same text on every external site linking to you, as normal user behaviour is to link using your brand.
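As an illustration, an external image link of this kind carries its "anchor text" in the ALT attribute; a sketch with hypothetical file names:
<a href="http://www.yourdomain.com/"><img src="infographic-small.jpg" alt="Your Brand - short descriptive keyword" /></a>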
-
RE: Seeing URLS indexed that we don't want how do we approach this?
Removing them from the sitemap will not make them disappear from Google's index. A sitemap is a tool which allows the spider to discover new pages, but once pages are indexed, they won't drop out of the index just because you remove them from it.
If you don't want them to be indexed, you can remove them using Google Search Console, under "Optimization" / "Remove URLs". It is faster than including the noindex meta tag.
If they contain just a link as in your example, I would remove them without any doubt.
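For reference, if you later prefer the slower meta tag route, the tag itself is just this line, placed in the head of each page you want out of the index:
<meta name="robots" content="noindex, follow" />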
-
RE: API Limit Question
Another question he raises is "When does the limit reset?" I have always had that doubt when reading the documentation. Is it monthly?
-
RE: How to 301 redirect, without access to .htaccess and to a new domain
If you don't have access to .htaccess, but you do have access to your own code, you can still perform the redirection via PHP, with this code:
<?php header("HTTP/1.1 301 Moved Permanently"); header("Location: http://www.yournewwebsite.com/yournewdocumenturl"); exit(); ?>
I would try whatever is possible to redirect individual pages rather than the whole site to the new root domain, not just for visitors but also to transfer your rankings to the new URL of each page.
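If you have many pages, a hedged sketch of how that per-page redirection could look in PHP (all paths and domains here are hypothetical, to be adapted to your site):
<?php
// Map each old path to its new destination URL
$redirects = array(
    "/old-category/old-product.html" => "http://www.yournewwebsite.com/new-category/new-product",
    "/about-us.html"                 => "http://www.yournewwebsite.com/about",
);
$request = $_SERVER["REQUEST_URI"];
if (isset($redirects[$request])) {
    header("HTTP/1.1 301 Moved Permanently");
    header("Location: " . $redirects[$request]);
    exit();
}
?>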
-
RE: Does a / at the end of a URL create a duplicate page?
Search engines are getting good at identifying common problems like this, but it is in fact a duplicate content issue. Given the low cost of redirecting one of those options to the other, or of implementing a canonical tag, I would not risk being detected as duplicate.
Also, always using the same notation helps you concentrate links on one page, as any incoming link will point directly to the correct address. If you randomly use both versions of the URL and both return content, visitors will copy whichever they see and you will end up with links pointing to both, damaging your link building.
Which option is best? It does not matter. For users it is usually "cleaner" to see no trailing slash, as it reads as visiting a document and not a folder. But either of them is perfectly fine.
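If you pick the version without the trailing slash and you are on Apache, a minimal .htaccess sketch to redirect the other variant would be:
RewriteEngine On
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.+)/$ /$1 [R=301,L]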
-
RE: Max Number of 301 Redirections?
There is a limit on the number of redirections Google and browsers will follow in a chain: you cannot create a loop, or string together endless, pointless 301 redirections.
But there is no limit on the number of pages you redirect: in any redesign, every one of your similar pages (for example, product pages) will create a redirection. 600 is not a big number of products for an ecommerce site, or a big number of posts for a blog... Don't worry.
Matt Cutts speaks exactly about your doubt here.
-
RE: Duplicate content issue with pages that have navigation
Hello.
I would include a different description and title for every section so they are not seen as duplicates if you only list the events.
Then, for the navigation on each section, you have three good options:
- Remove every page except the first from search engines using robots.txt
- Create a "Show all" page, linked from menus, and put canonicals on each pagination pointing to the full listing.
- Implement the link rel="next" and rel="prev" meta tags to help Google interpret the listing as pagination (see the example below).
Any of those approaches would help Google quite a lot to understand what is happening on your site.
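For the third option, the tags go in the head of each paginated page; a sketch with hypothetical URLs, for page 2 of the listing:
<link rel="prev" href="http://www.example.com/events?page=1" />
<link rel="next" href="http://www.example.com/events?page=3" />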
I hope it helps you.
-
RE: Should I use the Google disavow tool?
If your website is not related to payday loans, you should do something:
First, visit the origin of those links. Are they automated comments on blogs, with strange messages unrelated to the post or to your website? Judging by the anchor text of your links, I would bet you have that profile of incoming links. If that is the case, you should definitely do something about those links. Otherwise, if you discover they are interesting, in-context links and something really worthy, you can let them be.
If you detect they are spam, you should first contact the webmaster of each site. Maybe they are not aware of having those links, which are probably caused by some kind of bug, or by a form open to visitors. A webmaster concerned about SEO will remove the links soon.
If you don't obtain a reply from the webmaster, and only in that case, should you use the disavow tool: when you are not able to have those links removed by normal means, your last option is to tell Google you don't want those links taken into consideration by their algorithm when ranking your site.
Good luck with doing that manual work on the 200 links.
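If you do end up using the tool, the disavow file is just a plain text file with one URL or domain per line; a sketch with hypothetical domains:
# Links we could not get removed after contacting the webmasters
http://spammy-blog.example.com/post-with-our-link.html
domain:spammy-directory.example.net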
-
RE: Lost Rankings Late April Even Though We Have A Mobile Site
There are a lot of factors which could have caused this. Without background on your recent actions, links, changes... or at least the domain name, it is impossible to give a valid answer.
Too generic a question, sorry.
-
RE: Low Index: 72 pages submitted and only 1 Indexed?
Which URLs do you include in your sitemap? Could you check whether you are trying to index
https://www.zenory.com.au/psychic-readings/psychic-readings or https://www.zenory.com.au/psychic-readings? The first one is the URL you link to in your menus, but it has a 301 redirect to the second URL format (and the same happens with the rest of the main options). That is quite a bad idea. Please make sure you include the correct address in the sitemap and not the one with the 301 redirect. That could be causing the problem of Google Webmaster Tools not showing the page from your sitemap as indexed: although the final page is properly indexed in Google (as you can check by searching site:www.zenory.com.au), GWT is not able to match the two addresses.
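In other words, each sitemap entry should contain the final URL, not the one that redirects; a minimal sketch for the page above:
<url>
  <loc>https://www.zenory.com.au/psychic-readings</loc>
</url>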
-
RE: HTTPS websites in Directories
A lot of directories, especially those based on old scripts, have issues with the regex which checks whether the URL is valid, and it forces the address to start with http:// instead of accepting https://. It is a problem with the directory, not with your website.
You should make sure you have a redirection from your http version to your https version, and then submit the http URL to the directory causing that failure. Please make sure the http version loads via a redirection and does not serve the content directly, or you will have a serious duplication issue.
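If you are on Apache, a minimal .htaccess sketch for that http-to-https redirection (assuming mod_rewrite is available) would be:
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R=301,L]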
-
RE: Is this mistake in use of 'publisher' meta tag
If it is a personal blog, that is a correct way of implementing those meta tags, as publisher and author are the same person. If it is a site with several authors, I would change the publisher tag so that it points to your Google+ page, and the author tag so that it points to each writer's profile.
That way you will benefit from both tags.
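For reference, a sketch of both tags in the head of a post, with hypothetical Google+ URLs:
<link rel="publisher" href="https://plus.google.com/+YourBrandPage" />
<link rel="author" href="https://plus.google.com/+AuthorName" />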
-
RE: Google Website title not showing correctly
I cannot know your exact case as you don't provide a URL, but I had a similar issue some weeks ago, which I have solved. For pages which are strongly internally linked within your site, Google sometimes shows the most common anchor text of the internal links pointing to that page, instead of the HTML title tag.
We changed the anchor text in our main menu and soon it changed on SERPs.
Could it be your case? Do you have that page linked as "Isagenix Australia" in your menu, footer or any other place in your site structure, or a lot of external links with that anchor?
-
RE: Duplicate Content errors - not going away with canonical
The problem is that you are implementing a canonical from all those parameterized pages to http://mathematica-mpr.com/news/, and the content on that page (an empty result list) is not the same as on the original link you provide in your post, which shows article results.
Canonical is used to indicate a preferred URL for a page when there are two or more URLs which lead to the same content. After implementing the canonical, you also have to make sure you are not linking to the "bad" URL, or it will never disappear from Google's SERPs.
As your content is not the same on both pages, and the old URL is still accessible from your site, you are not fulfilling the requirements to make those URLs disappear.
I think that, since you don't want those result listings to appear on the SERPs (depending on the search options they lead to very generic pages, and it is almost impossible to clearly identify a canonical reference for them), the best option is to add a line to robots.txt which blocks those parameters, and you will be done with the problem:
Disallow: /news/?facet=*
This will remove your search results from the index, ending the duplication problems. Please make sure this is what you want, or whether you prefer to keep those results indexed even though the duplication issues continue.
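Just remember that a Disallow line has to live inside a User-agent group, so the full robots.txt block would look like this:
User-agent: *
Disallow: /news/?facet=*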
-
RE: Duplicate Meta Titles and Descriptions Issue in Google Webmaster Tool
Hello.
This is a common problem for those of us working on ecommerce. Every filter, sorting widget and any other option you give the customer to browse the site in a more comfortable way becomes a pain in the ass in terms of duplicated content for search engines.
Apart from implementing all the canonical tags as you say you did, you could also look at removing as many parameters as you can in Webmaster Tools (especially those dealing with sorting or the number of items shown).
Finally, you should decide whether you want to keep bots from indexing those special filter combinations so that you can focus on the category pages. You give the example of pagination, but the same applies to filtering by manufacturer. The option we take, and which I think is the better one, is to add a "noindex" meta tag to those kinds of pages and only index the main page of each category. If we think a filter is important as a keyword (for example, "adidas soccer boots" for the category "soccer boots" with the brand filter adidas), we create a special description for that page so that it is no longer duplicate content; if we are not able to create that description manually, we just add the noindex tag as I said before and forget about that page on search engines.
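A hedged sketch of that rule in PHP (the object and field names are hypothetical, just to show the logic): index a filtered page only when it has a hand-written description, otherwise mark it noindex.
<?php
// Hypothetical filter-page object; in practice this would come from your database
$filterPage = new stdClass();
$filterPage->customDescription = ""; // empty unless an editor wrote a unique text for this filter

if (!empty($filterPage->customDescription)) {
    // Hand-written text makes the page unique, so let it be indexed
    echo '<meta name="description" content="' . htmlspecialchars($filterPage->customDescription) . '" />';
} else {
    // No unique text: keep it out of the index, but let bots follow its links
    echo '<meta name="robots" content="noindex, follow" />';
}
?>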
-
RE: Does sitemap auto generation help in SEO 2015
It is not a must nowadays. As soon as you get some external links, bots will be able to start indexing your pages. But, honestly, creating an automatic sitemap is an easy task, and it certainly speeds up the process of the whole site appearing in the search engines, especially the deepest pages. It is also a way to index areas which are not accessible via links.
So it is not the main goal in SEO, but it is still recommended, especially for the launch of new sites.
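As an illustration of how little work it is, a hedged PHP sketch of an automatic sitemap (the URL list is hypothetical and would come from your database in practice):
<?php
// Hypothetical list of canonical URLs to expose in the sitemap
$urls = array(
    "http://www.example.com/",
    "http://www.example.com/category/product-1",
);
header("Content-Type: application/xml; charset=utf-8");
echo '<?xml version="1.0" encoding="UTF-8"?>' . "\n";
echo '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">' . "\n";
foreach ($urls as $url) {
    echo "  <url><loc>" . htmlspecialchars($url) . "</loc></url>\n";
}
echo "</urlset>";
?>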
-
RE: Gets traffic on both domain.dk and domain.dk/default.asp
Hello.
Just create a file called .htaccess in your root with these lines of code:
RewriteEngine On
RewriteCond %{THE_REQUEST} ^.*/default\.asp
RewriteRule ^(.*)default\.asp$ /$1 [R=301,L]
Also, use a code editor to search your code for appearances of "default.asp", to make sure you are not linking to that full address, as such a link could be the reason Google is indexing the wrong URL.
If you are not comfortable with .htaccess and redirections, you could also implement a canonical meta tag to indicate your preferred URL for that page.
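That canonical tag would go in the head of the page served at both addresses; a sketch with a hypothetical domain:
<link rel="canonical" href="http://www.yourdomain.dk/" />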
-
RE: How best to roll out updated website to new responsive layout
In fact, it would be really strange for users to browse a site whose design changes from one page to another. And thinking about crawling bots, having a different menu structure could lead to a misinterpretation of what is happening on the site.
I would focus on keeping the same URLs after the website redesign, and make sure to create proper redirections if any URL has to change.
Also, it is easier to put the whole site into production at once than to test small change after small change. So, since mobilegeddon hasn't been as dramatic as everybody expected, I think your position is more correct than your partner's.
-
RE: Backlink Report Totals Discrepancy
If that redirect works (check not only the front page, but also internal pages) and you still see the three kinds of results, you should implement the canonical tag on your site to make sure it is detected as the same page regardless of which URL is used to reach it.
-
RE: Can I add more then 3 competitors do my SEO moz pro account?
More than 3 competitors is a must nowadays; for a paid Pro account, this is by far one of the most limiting restrictions on your otherwise wonderful service.