Posts made by Bio-RadAbs
-
RE: Is this correct?
Thanks, I realise the value should be either a correct relative URL or a correctly formed absolute URL. In Moz's case, they used a correctly formed absolute URL.
My question is more around...why not use "/"?
Cyto
-
Is this correct?
I noticed Moz using a full absolute URL in the rel=canonical tag for its homepage.
Is this best practice though? The reason I ask is that I use a relative URL ("/"), and I've been reading this page by Google:
http://googlewebmastercentral.blogspot.co.uk/2013/04/5-common-mistakes-with-relcanonical.html
5 common mistakes with rel=canonical
Mistake 2: Absolute URLs mistakenly written as relative URLs
The <link rel="canonical"> tag, like many HTML tags, accepts both relative and absolute URLs. Relative URLs include a path “relative” to the current page. For example, “images/cupcake.png” means “from the current directory go to the “images” subdirectory, then to cupcake.png.” Absolute URLs specify the full path, including the scheme like http://.
Specifying <link rel="canonical" href="example.com/cupcake.html"> (a relative URL since there’s no “http://”) implies that the desired canonical URL is http://example.com/example.com/cupcake.html even though that is almost certainly not what was intended. In these cases, our algorithms may ignore the specified rel=canonical. Ultimately this means that whatever you had hoped to accomplish with this rel=canonical will not come to fruition.
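To make the contrast concrete, here are the two forms side by side, using the cupcake URL from Google's example:
<!-- mistaken: relative, so it resolves against the current page -->
<link rel="canonical" href="example.com/cupcake.html">
<!-- correct: fully formed absolute URL, scheme included -->
<link rel="canonical" href="http://example.com/cupcake.html">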
-
RE: Moving to https and back to http, would it hurt?
Matt has summed it up, but my question is: why don't you get a wildcard certificate?
It will make things much easier, and if you ever move your website to a CDN, an HTTPS site with a wildcard certificate makes the process smooth!
-
RE: Was hit with panda in 2012, what to do now?
Should I address this issue now or is it too late to matter?
- Never too late
How would I do it, a link audit through a freelancer?
- Depends on your budget and time. Rebecca's suggestion to use a tool like Link Detox and contact every webmaster is a good one. If you've got the time to do it yourself, go for it; otherwise, if you've got the budget, hire a freelancer, but check everything.
What do you SEO mozters recommend?
- Rebecca's post summed it up
-
RE: What are the top tips for winning on Google, Bing, and Yahoo?
- Webmaster tools. Google and Bing (which powers Yahoo) have them. Get your site set up and your sitemap loaded in their webmaster tools! A robots.txt pointer helps too; see the snippet after this list.
- High authority backlinks. How about a wikipedia page? Or start creating some rich content that users will find useful and share? Even social media metrics come into ranking.
- Make your site run HTTPS and use a CDN
- Images and video. Don't forget users also land on your site from the image and video search tabs.
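On the sitemap point, the robots.txt pointer is a one-liner (hypothetical domain):
# robots.txt: tell any crawler where the sitemap lives
Sitemap: http://www.example.com/sitemap.xml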
-
RE: Keyword Stuffing - Image Alt
What do you mean by "along with 25 other elements of the keyword.."
Your product title is automatically inserted as the alt text of an image, so what are the other 25 elements? If it's just the title, sure, I can understand, and it should be fine, as long as the image is about the title.
Now, if the alt text is being stuffed with other elements of the product (such as other fields), then I would advise altering that automatic code.
Do it.
-
RE: Duplicate Content
All three:
- the canonical URL tag,
- changing the page meta,
- and the body text.
In addition to the body text, bring in city-specific and state-specific content for that category: images, text, and even cross-selling, to make the page unique.
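For instance, the head of a city-specific category page might carry its own canonical and meta, a sketch with hypothetical URLs and copy:
<!-- one canonical per unique city page, plus a meta description written for that city -->
<link rel="canonical" href="http://www.example.com/widgets/austin-tx">
<meta name="description" content="Widget hire in Austin, TX, with same-day delivery across the city.">
<title>Widget Hire in Austin, TX</title>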
-
RE: SEO before products on ecommerce site
In my experience, change always happens..
- People might request a change in URL structure (which then means setting up 301 redirects)
- It's better to let Google crawl a rich page than a weak page. When it comes to ecommerce sites, you don't want "page not found" because the product hasn't been published yet while its category page is done. If the sitemap leads to broken pages, well, customers won't like it, so don't expect bots to like it either.
- I would work on your ecommerce site and get it done in preview, all ready to go. You can also set up things like Google Webmaster Tools and Bing Webmaster Tools, ensure your sitemap is ready, and check your URL structure is fine.
So my advice is to set it up and then launch. You don't want to end up "fixing" your website post launch. Here's another way to look at it: your credit score. It's easier to build a good credit score than to fix a bad one. SEO is the same.
-
RE: Indexing of Search Pages
Hmm, so what it comes down to is that you can index search pages, provided they have a purpose or add value for the end user.
For instance, a user would search by category, whereas an individual product search result isn't necessary when a product page exists.
Thanks Dirk for the links, they help a lot
Cyto
-
Indexing of Search Pages
I have a question on indexing the search pages of an ecommerce site, or any website. I read that Google doesn't recommend this and that sites shouldn't allow indexing of their search pages.
I recently attended an SEO event (BrightonSEO) and one of the talks was on search pages and how big players like eBay and Amazon do index their search pages. In fact, it's a core part of the pages they have indexed.
eBay has to do it, as their product listings are time-limited, and Amazon only allows certain category search pages to be indexed. Reviewing my competitors, they are indexing search pages, and this is why they have thousands, even millions, of web pages indexed.
What are your thoughts? I thought search pages were too dynamic (URL strings) and wouldn't have a unique page title, meta description or rich content to act as a well-optimised page.
Am I missing a trick here?
Cyto
-
RE: Https Loss of Search traffic
I would also set up your HTTPS site as a new site in Google Webmaster Tools and Bing Webmaster Tools (Bing does it automatically, but set it up in GWT). Once you have done that, make sure your sitemap is added and crawled again.
Get GWT to crawl the sitemap again from your non-HTTPS account too, so the redirects are recorded.
-
RE: How do I add https version of site to Bing webmaster tools?
Thanks Niners52.
I moved to HTTPS and was facing the same problem. Your answer has helped.
I wonder if there is a need to submit an HTTPS sitemap on the HTTP Bing profile. Did you think about this?
-
RE: Google Search Results...
The goal is to identify which pages Google is indexing and whether there are any it shouldn't. (We don't index search pages, and we don't index basket or checkout pages.)
I don't know all of the subdomains, and searching them individually isn't adding up to the total count I get from site:company.com.
My Moz reports show no duplicate pages, so it can't be that. If I were able to download the full Google search results into a spreadsheet, I could quickly filter and see which pages are being indexed that shouldn't be.
-
RE: Google Search Results...
My GA is only focused on a single domain, as the subdomains hold just PDFs, images etc. Traffic reports from GA are focused on www.company.com pages.
The only way I can know exactly which URLs have been indexed seems to be going through the Google search results, but they cap out after 7 pages.
-
Google Search Results...
I'm trying to download every Google search result for my company (site:company.com). The most I can get is 100; I tried SEOquake but it also stops at 100.
The reason for this? I would like to see which pages are indexed. The www and subdomain pages should only make up 7,000, but the search results show 23,000. I would like to see what the other pages in the 23,000 are.
Any advice on how to go about this? I can individually check subdomains (site:www.company.com, site:static.company.com), but I don't know all the subdomains.
Anyone cracked this? I tried using a scraper tool but it was only able to retrieve 200.
-
RE: High Temporary Redirects: Login required pages
It isn't defined in robots.txt, just in Google Webmaster Tools, Bing, and the Google Analytics settings.
I guess mentioning it in robots.txt should tell ANY bot to ignore it, something like the snippet below.
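A rough sketch of the rule I mean (Googlebot and Bingbot honour the * wildcard; older bots may not):
# keep crawlers away from any URL carrying the returnto parameter
User-agent: *
Disallow: /*?returnto=
Disallow: /*&returnto=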
Thanks Don
-
High Temporary Redirects: Login required pages
Noticed something interesting: a high temporary redirect report from Moz. Reviewing the pages, the redirects are caused by the user having to log in and getting redirected.
I can see the returnto query in the URL too. My thoughts:
- Since a login is required and the user is being redirected, these should remain 302s and not 301s.
- I set my test Google Analytics view to Exclude URL Query Parameters: returnto, just to see if it affected traffic. It didn't; I don't see URLs duplicated with the parameter anymore, just grouped together, so traffic is still being counted.
I'm going to wait one more day and see what the impact on GA traffic is before applying the exclusion to my main Google Analytics profile.
This got me thinking: I should probably exclude this parameter in Google and Bing Webmaster Tools too, so Google/Bing won't read those URLs.
Now, does Moz's crawler follow that? Do you think it would change my Moz crawl diagnostics report just because I told the Google/Bing crawlers to exclude that parameter?
What do you think of my approach to reduce these high temporary redirects reported by Moz? Will it work? Has it plagued you?
-
Breadcrumbs and Left hand menus....
I wanted to get some opinions on breadcrumbs and left-hand menus on a page. Take a traditional two-column page: you have the navigation on the left and the page copy on the right.
The introduction of breadcrumbs serves two things:
- Navigation for the user; it shows structurally how the page fits.
- SEO benefit: include breadcrumbs, use schema, and you potentially get more page links on a single search result (see the markup sketch after this list).
- Thinking about this, aren't the left-hand navigation and breadcrumbs the same? The left-hand navigation shows where the page fits, which is precisely what the breadcrumbs do.
- You are doubling the links: left-hand menu and breadcrumbs.
- Sure, you can get rid of the breadcrumbs, but from an SEO standpoint, won't you lose the ability to have breadcrumbs via schema in search results?
- Do you see breadcrumbs and menus differently? Should breadcrumbs only exist on single-column pages (say, an ecommerce page) and left-hand menus on two-column pages like an article page?
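For reference, the schema markup I mean looks roughly like this (schema.org BreadcrumbList in microdata; the categories and URLs are hypothetical):
<ol itemscope itemtype="http://schema.org/BreadcrumbList">
  <!-- each crumb is a ListItem with a name, link, and position -->
  <li itemprop="itemListElement" itemscope itemtype="http://schema.org/ListItem">
    <a itemprop="item" href="http://www.example.com/tools"><span itemprop="name">Tools</span></a>
    <meta itemprop="position" content="1">
  </li>
  <li itemprop="itemListElement" itemscope itemtype="http://schema.org/ListItem">
    <a itemprop="item" href="http://www.example.com/tools/drills"><span itemprop="name">Drills</span></a>
    <meta itemprop="position" content="2">
  </li>
</ol>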
Thoughts?
-
RE: Page Load Timings: How accurate is Google Analytics Data?
I'm interested to know the answer to this question too. I'm about to introduce a CDN, as our company is a global one, and I'm hoping a CDN will reduce page load time and thus the average load time GA reports.
Curious why GA is recording worse times for you.
-
RE: Can bots identify shmushed keywords?
Let's say they do, but does that mean you stand a better chance of ranking than a competitor who uses hyphens? If hyphens represent spaces, and a user searches chanel number 5, doesn't that mean a website with the URL chanel-number-5.html might rank better than chanelnumber5.html?
Just as for a human, chanel-number-5 is easier to read than chanelnumber5.
So even if the bot reads a URL without a hyphen, a competitor with hyphens might edge you out.
That's my two cents
-
RE: Double hyphen in URL - bad?
- Have you seen websites use double dashes?
A single dash has become the norm; users read it and use it that way. A double dash is outside the norm, in my opinion, and you could end up with a lot of incorrect backlinks. A customer might type the URL with a single dash and land on a 404 page, for instance.
-
RE: 2 different pages being shown as duplicate content.
Absolutely, as Don and Monica have raised, your unique content is outweighed by your "duplicate" content.
Get some more rich text. For instance, put yourself in the shoes of your customer and think about content that would make you buy the product with NO DOUBTS!
- Things like testimonials/reviews will aid the user
- Why not introduce some protocols or tips to make the content different?
- More images with optimised file names and alt tags (don't just name images 187041111-370_370.jpg!)
- How about a short description of the product up top under the title, and a descriptive, unique one further down the page? It helps spiders crawl your keywords early, it sells the product in an instant to users, and it enriches the page.
It's no different to the real world. A user might do research before making the purchase in store, so why not offer that research on the page itself to reaffirm that this is the product they want, and in turn make your page a better-optimised one?
-
RE: Backlinking for small service oriented websites
When it comes to backlinking, I can offer a suggestion. Get a list of competitors and use Open Site Explorer. You will be able to see which websites your competitors have content on and are generating backlinks from.
Now go through that list, identify the top-level domains, and see if you can also produce high-quality content they would link to. A great infographic, or a top-tips article, something users will find relevant and useful.
I would suggest that as a start. It's something I've started doing, but the key in my mind is not to spam but to build reputable links that resonate with your company.
-
RE: Not provided....
How the hell does a tool like SimilarWeb Pro work though? It says it will show you the organic keywords that drive traffic.
Moz does offer a report I find useful: the landing page report (Search >> Landing Pages). You feed it keywords and Moz will tell you which URL they land on and how much traffic your keywords might be driving to that page.
It's a great way to find out what keywords might be driving traffic and then "optimize" your page for that keyword.
-
RE: Thinking about not indexing PDFs on a product page
Thanks EGOL, I hadn't thought about setting rel=canonical via .htaccess. Great idea
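For anyone else finding this, a sketch of what that could look like (assuming Apache with mod_headers enabled; the file and URL names are hypothetical):
# .htaccess: send a rel=canonical HTTP header for the PDF, pointing at its HTML page
<Files "widget-datasheet.pdf">
  Header add Link '<http://www.example.com/products/widget.html>; rel="canonical"'
</Files>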
-
RE: Thinking about not indexing PDFs on a product page
I don't see my PDFs show up for a search term when my HTML pages are being displayed.
However, there was a situation when a PDF was displayed, and I created an HTML page from it and set up a redirect from the PDF to the HTML page. I followed that up by re-uploading the PDF at a new URL and offering it as a download. That way I transferred the rank juice to the HTML page.
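That redirect was just a permanent one, roughly like this (Apache assumed; the paths are hypothetical):
# 301 the old PDF URL to the HTML page so links and rank consolidate there
Redirect 301 /downloads/widget-guide.pdf http://www.example.com/products/widget-guide.html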
In a nutshell, no, I don't see my PDFs outranking my HTML pages, but I do know my PDFs are indexed, and I don't know if they show up for a different search term.
I guess my main question is: would not indexing them open up the chance for more backlinks to your HTML page instead of the PDF? And in Google's eyes, it won't have to debate which to display, the HTML page or the PDF, as both have the same content.
Maybe I'm overthinking it and the straight answer is: if an HTML page exists, Google won't give preference to the PDF, but in the event there is no HTML, the PDF is shown.
-
RE: Thinking about not indexing PDFs on a product page
Yeah, we offer the same. The user is able to download the PDF or have it open in a new window. I haven't seen Google automatically present my PDF, and so far my searches have shown my HTML page, but my question to Cole remains: could Google be comparing the PDF and HTML page with each other? What if, in some searches, it would prefer to show the PDF above the HTML page?
On your next question, I don't get duplicate warnings for the PDFs. I believe the PDFs are indeed being indexed, as the text is readable. How well are they being indexed? I've got close to 22,000 search results for my subdomain, so yeah, they are indexed.
I do have rel=canonical tags on the HTML pages, but I can't put one on the PDF as it's a file and not a page.
-
RE: Thinking about not indexing PDFs on a product page
Thanks for the replies
Cole - Google indexed our PDFs though. I tested this by searching site:domain.com plus a search term, and then site:static.domain.com plus the same term.
Result:
site:static.domain.com search term
Google showed me the PDF document that is available for download from the HTML page that ranks high for that search term.
So Google is indexing both the PDF and the HTML. To answer your question as to why I don't want them indexed: well, my thinking was, if the PDF appears and someone backlinks to it, I'd rather that backlink went to the HTML page. The PDFs are hosted on my subdomain and I don't want the subdomain to get the rank. In the back of my head, I'm also debating whether my PDF and HTML pages are competing with each other.
-
Thinking about not indexing PDFs on a product page
Our product pages generate a PDF version of the page in a different layout. This is done for two reasons: it's been the standard across similar industries, and it helps customers print them when working with the product.
So there is a use when it comes to the customer, but search? I've thought about this a lot and my thinking is: why index the PDF at all? Only allow the HTML page to be indexed. The PDF files are on a subdomain, so I can easily noindex them. The way I see it, I'm reducing duplicate content.
On the flip side, since it's hosted on a subdomain, the PDF appearing when the HTML page doesn't is another way of gaining search real estate. If it appears alongside the HTML page, that's more coverage.
Has anyone else done this? My knowledge tells me this could be a good thing; it might even stop backlinks being generated to the PDF and lead to more HTML backlinks.
Can PDFs exist solely as content accessible from the page, and not be relevant to search engines? I find them a bane when they are on a subdomain.
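If I do go down this route, my understanding is that a noindex via the X-Robots-Tag header on the subdomain would cover every PDF in one rule, a sketch assuming Apache with mod_headers:
# keep all PDFs on this subdomain out of the index
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>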
-
RE: Schema.org on Product Page showing strange result if you post url in google
No problem, Moz can highlight duplicate content for you. Run a crawl diagnostic and review the report.
-
RE: Product Page on Eccomerce Site ranking very poorly - Unique Product description but duplicate content on other tabs.
- Can you reduce the duplicate content by turning it into a footer link? For instance, you could move the entire 'about the company' tab into a footer link pointing to an about-us page. I've seen plenty of product-focused sites that link out to specific pages like 'terms and conditions' and 'about', and only keep what is relevant to the customer on the page.
- No need to spam pages with content for the sake of content. It's about placing content that is relevant. Quality over quantity.
- Let the unique product descriptions, titles, headers, images, image tags, seo friendly urls, breadcrumbs and a good sitemap help you structure the pages correctly. You will start seeing improvements.
- Throw in a valuable blog focused on your area that users will find useful to build quality external links and you will further benefit.
- Better yet, look at moving your site to HTTPS. Google recently announced that it values sites that move to HTTPS. You have an ecommerce site; it will help.
-
RE: Schema.org on Product Page showing strange result if you post url in google
Sometimes Google will change the description and title depending on the search term. I believe this is what is happening: you searched for the URL and it changed the meta description.
Here's a screenshot of your page when I searched site:bestathire.co.uk 1648
As you can see, your meta description is appearing correctly. Google's Structured Data Testing Tool is also showing the schema.
What I also spotted is duplicate content indexed in Google:
- http://www.bestathire.co.uk/products/view/1648/Carpet_Cleaner_Domestic
- http://www.bestathire.co.uk/thanks/1648/8845/DIY_tools_equipment/Cleaning_equipment/Carpet_cleaners
You can also tag your page using Google Webmaster Tools >> Search Appearance >> Data Highlighter.
Hope that helps
-
RE: What are some tips to increase your CTR in search results?
Agree with Chris and Moosa's points. The following worked for me
- Schema
- Create meta descriptions that will encourage click-throughs. For a product page, our meta description summed up what the product did and the ranges we offered. It led to more clicks.
-
RE: Google Analytics: Different stats for date range vs single month?
Google's pre-built segments are working fine for me (i.e. All Sessions); it's my bot-filtering segment, when I view a date range vs a single month, that is causing the confusion.
I got an answer from the Google Analytics technical forum. There is a setting to exclude bot traffic, which avoids the need to build custom segments.
You can do this for each profile, so I've turned it on to test; let's see how my results turn out. If it removes the bot traffic automatically, then the date range shouldn't change things. Here's how to do it:
Admin >> View Settings >> Bot Filtering (tick the checkbox), then the Save button. Done... no need for a dedicated segment.
-
RE: Google Analytics: Different stats for date range vs single month?
I didn't know about this. I always thought a date range would show the same data as the smaller range.
The strange thing is, if I note down each month's sessions individually and then add them up, they closely match the total session count my date range gives. It's just the month data within the range that is wrong.
-
Google Analytics: Different stats for date range vs single month?
I've been scratching my head, chin, and you name it over this one.
I have an advanced segment to remove bot traffic from my data. When I look at the Audience Overview data for a single month (let's say August), I am shown a session count.
No problems here. However, if I set the date range to January-August, the August monthly stats are incorrect, much lower. This means that if I export a CSV report for Jan-Aug, the data is wrong compared to recording each month individually.
Anyone faced this? I've asked the question over at the Google Analytics technical section as well, but no answer.
P.S. I even used the 'control the number of sessions used to calculate this report' tool, but no luck.
-
RE: URLs: Removing duplicate pages using anchor?
Thanks Everett,
- Rel="canonical" is in place, so that's covered
- The urls with the parameter are only accessible if you want to directly access a particular size. If you are on the default page and switch sizes from the dropdown, no URL change is presented.
- I have left webmaster to decide what should be crawled or not. The parameter has been mentioned though.
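For reference, the canonical on each ?f= variant looks roughly like this (using the example URLs from my question):
<!-- on product-alpha-size1?f=size2 and friends, point back to the base URL -->
<link rel="canonical" href="http://www.example.com/product-alpha-size1">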
-
RE: URLs: Removing duplicate pages using anchor?
Thank you Celilcan2,
- I'll set it up as 'Yes' and it 'narrows' the page.
- What is the perk of doing this though? Will Google not count anything after the parameter as having value, and focus on just the single URL?
-
URLs: Removing duplicate pages using anchor?
I've been working on removing duplicate content on our website. There are tons of pages created based on size but the content is the same.
The solution was to create a page with 90% static content and 10% dynamic content that changes depending on the size. Users can select the size from a dropdown box.
So instead of 10 URLs, I now have one URL.
- Users can access a specific size by adding a query string to the end of the URL (?f=size2, ?f=size3, etc.)
For e.g:
Old URLs.
- www.example.com/product-alpha-size1
- www.example.com/product-alpha-size2
- www.example.com/product-alpha-size3
- www.example.com/product-alpha-size4
- www.example.com/product-alpha-size5
New URLs
- www.example.com/product-alpha-size1
- www.example.com/product-alpha-size1?f=size2
- www.example.com/product-alpha-size1?f=size3
- www.example.com/product-alpha-size1?f=size4
- www.example.com/product-alpha-size1?f=size5
Do search engines read the query string or drop it? Will the rank juice be transferred to just www.example.com/product-alpha-size1?
-
Chrome Add-on - see text version?
I used to have a Chrome add-on to see the cached text-only version of a website. I lost it and was wondering if anyone knows an alternative or a good add-on.
Cyto
-
RE: Moz Local | Empty page "Categories"
The page loads now, it was indeed a speed issue. It is loading quickly now.
Thanks for fixing!
-
Moz Local | Empty page "Categories"
Dear Moz,
Another error, the following url loads an empty page https://moz.com/local/categories
Please review
Thanks!
-
Moz Local | Download Template
Dear Moz
I've received your email about Moz Local. A fantastic tool, but it does not allow you to download a template. Clicking 'Download this template' simply reloads the page.
I am testing it in Chrome's incognito mode with no add-ons.
Thank you!
-
CSS Hidden DIVs - not collapsable content. Amber light?
I'm in the planning stage of a new ecommerce page. To reduce duplication issues, the page will be static, with 20% of it made up of dynamic fields.
So when a user selects a size or color, the dynamic fields are the only ones that change, as the rest of the content is the same. I can keep a static URL and not worry about duplication issues. The focus can be on strengthening this single URL with rich schema, reviews, and backlinks.
We're going to cache a default page so that, for crawlers, the dynamic fields don't appear empty. My developer said they can cache the page with all the variants of the dynamic fields and use hidden DIVs to hide them from the user.
This way, load speed stays high, and search engines might crawl those keywords too. I'm thinking about it and going, "wait a minute, that's a good idea... but would a search engine think I am hiding content and give me a penalty?" The hidden content is relevant to the page, and it only appears according to the dropdown, to make the user experience more friendly.
What do you think? Use hidden DIVs, or use JavaScript so bots can't crawl the hidden data at all?
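To be concrete, the pattern my developer is proposing is roughly this (a hypothetical sketch; class names and copy are made up):
<!-- all size variants rendered in the cached page; only one visible at a time -->
<div class="size-panel" data-size="size1">Copy specific to size 1...</div>
<div class="size-panel" data-size="size2" style="display:none">Copy specific to size 2...</div>
<div class="size-panel" data-size="size3" style="display:none">Copy specific to size 3...</div>
<!-- the size dropdown toggles which panel is displayed via JavaScript -->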
-
Static pages with dynamic content: Good for SEO?
I wanted to get people's thoughts on reducing duplication by creating a static page with a few dynamic fields. Has anyone done this?
If 75% of a page is static and 25% is dynamic, is that a good ratio? It's an idea I am thinking about to combat duplication issues affecting ecommerce pages. Some ecommerce sites generate a new page for a small change like size, but the content is the same. What if you could create a single static page and depending on the size chosen, only those fields connected to the size are dynamic? Everything else remains the same.
For caching purposes, you always submit a cached page with default values.
Wouldn't this work? Isn't this a solution for duplicate ecommerce pages? It would also help with ranking: rather than external links being spread across multiple duplicate ecommerce pages, they would all point to a single page.
-
RE: Are we designers at heart?
My approach is a thumbs up with a smile. Universal, and it makes everyone feel epic! Including myself.