Yes, I understand that much. But in the actual report from Moz, if you had any 4xx errors, it would tell you in the report the exact URL that returned the error. If there was no URL listed, then the crawl found no URLs returning that error.
Posts made by MikeRoberts
-
RE: Receiving 4XX status codes
-
RE: Receiving 4XX status codes
Where did the original report say the error was coming from? Does the originating URL return a 4xx error if you go to it manually?
-
RE: Receiving 4XX status codes
Normally the reports will let you know what page the error was encountered on and/or originated from. If there's an error listed there, then odds are either the crawl found a broken link within your site that returned a 404 Not Found, or you have something blocked that is returning a 403 Forbidden. Have you run any other crawls of the site (such as with Screaming Frog) or checked Google Search Console to see if any crawl errors are listed there?
-
RE: Google displaying SERP link in Japanese
I tried doing searches from a simulated location in Red Bank and I'm not seeing Japanese characters anywhere.
-
RE: Does social media presence + inbound links outweigh bad SEO?
There are numerous factors that could be causing this. Have they been around longer? Do they have any .Gov or .Edu backlinks? Are the tons of links they have all from relevant sources? Do they have a better array of linking domains than your site does? Do they appear to be driving a lot of referral traffic from social media to their site? Are all those duplicate content pages you see actually created by parameters but canonicalized to the proper pages? Has your site ever been impacted by an algorithmic or manual penalty? Do your links come from relevant sources? Are you over-targeting your pages to keywords to the detriment of user experience? Is the anchor text of your backlinks all hyper-targeted or is the distribution more natural? And 404s aren't necessarily a bad thing if they're relevant 404s.
-
RE: How important is a keyword rich domain name for ranking?
There are tons of reasons they could be ranking higher for that specific search term. Sadly, Google has never handed out a cheat sheet of what percentage of the algorithm is affected by what, or how heavily certain things are weighted for or against a site with respect to rankings.
Maybe their few links are from .Gov and .Edu sites, maybe they're incredible for just that term and lackluster everywhere else, maybe other relevancy signals throughout their site lend more credence to that page than you would assume, maybe they have better bounce rate & return traffic signalling that the page is an authority on the subject, maybe they're doing something underhanded and haven't been hit with a penalty yet. Maybe your content is not as useful despite the links, maybe the links to your page are passing less equity, maybe you've overused the term and it looks like keyword stuffing.
It could be any number of things. The important bit is not trying to move up one or two spots in order to "take down" a random competitor. The important thing is making your site user friendly and informative, providing the best service you can, and hitting whatever your goals are, whether that is sales, newsletter signups, contact form submissions, or just seeing increases in return traffic & time on page.
Exact match domains aren't as useful as they may have once been. I've always found having a good, memorable "brand" name better. It's much easier for people to remember you and come back if your site is Apple as opposed to MacIntosh-Personal-Laptop-Computers.com, even if that would hit _some_ target terms.
-
RE: Description in the snippet varies from search to search.. Have you seen that?
Well, yes and no. Because of how varied any relevant search could be, you can't necessarily hit every single variation to a good enough extent that Google's algorithm would never change your description. You _could_ stuff literally every piece of info into the meta. Or you could write more naturally and make sure all the proper meta and/or schema and/or other tags are implemented properly. You just need to make sure you are hitting your targets for descriptions, that they are good for the user experience, and that they are not stuffed or spammy.
In the previous example, you could have updated your description to mention "Seller of Red Widgets & Green Widgets!" and the original [green widget] searcher might now be seeing your actual description in the SERPs. But another [green widget] searcher just spent the earlier part of their day Googling [widget coupons], [widget sales], [coupon sites], [internet coupon pages], [groupon], and/or [widget deals]. Now, after an hour of looking for coupons, they search [green widgets] and you'd think your description would show up like it did for the other searcher. Instead, Google notices that you have schema tags for an upcoming sale on Green Widgets nestled somewhere halfway down your page... and all of a sudden your description is algorithmically changed to include that Green Widget discount info even though it wasn't in your description. But the page was relevant, and contextual information led to an improved description. (This, though, is a hypothetical best case scenario; it's not always that amazing and contextual... sometimes it's just Google randomly truncating a sentence because they feel the middle of the paragraph is most relevant.)
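To make that concrete, here is roughly what the before and after description tags from this hypothetical widget example would look like (the wording is just the made-up copy from the example, not a recommendation for any actual page):
<!-- Original description, only mentions the main product -->
<meta name="description" content="Seller of top quality Red Widgets!" />
<!-- Updated description covering both products -->
<meta name="description" content="Seller of Red Widgets & Green Widgets!" />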
-
RE: Description in the snippet varies from search to search.. Have you seen that?
Yup, Google has been doing this for a few years. The same page can have differing descriptions depending on what the search was. Usually this happens in the case of a page that is relevant to a search with a description that doesn't necessarily appear to be relevant enough to said search.
This doesn't make meta descriptions useless though. The meta description is still a useful signal to denote what your page is about. But for argument's sake, let's say you have a page on Widgets (why is it always widgets?). You have a page on Widgets and your main product is Red Widgets... so your meta description reads "Seller of top quality Red Widgets!". But someone searches for Green Widgets. Your page is relevant for Green Widgets because you sell those too, but your description doesn't mention them since they're a less important product to you. So Google alters your description for that search, pulling in some of the info on your page about Green Widgets so that the Green Widget searcher knows your page is actually useful to them.
-
RE: Include or exclude noindex urls in sitemap?
That opens up other potential restrictions to getting this done quickly and easily. I wouldn't consider it best practice to create what is essentially a spam page full of internal links, and Googlebot will likely not crawl all 4000 links if you have them all there. So now you'd be talking about making 20 or so thin, spammy-looking pages of 200+ internal links to hopefully fix the issue.
The quick, easy-sounding options are often not the best option. Considering you're doing all of this in an attempt to fix issues that arose due to an algorithmic penalty, I'd suggest trying to follow best practices for making these changes. It might not be easy, but it'll lessen your chances of a quick fix becoming the cause of, or a contributor to, a future penalty.
So if Fetch As won't work for you (considering lack of manpower to manually fetch 4000 pages), the sitemap.xml option might be the better choice for you.
-
RE: Include or exclude noindex urls in sitemap?
You could technically add them to the sitemap.xml in the hopes that this will get them noticed faster, but the sitemap is commonly used for the things you want Google to crawl and index. Plus, placing them in the sitemap does not guarantee Google is going to get around to crawling your change or those specific pages. Technically speaking, doing nothing and just waiting is equally valid. Google will recrawl your site at some point. Sitemap.xml only helps if Google is crawling you to see it. Fetch As makes Google see your page as it is now, which is like forcing part of a crawl. So technically Fetch As will be the more reliable, quicker choice, though it will be more labor-intensive. If you don't have the man-hours to do a project like that at the moment, then waiting or using the sitemap could work for you. Google even suggests using Fetch As for URLs you want them to see that you have blocked with meta tags: https://support.google.com/webmasters/answer/93710?hl=en&ref_topic=4598466
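If you do go the sitemap route, a minimal sketch of what the file could look like is below (the example.com URLs are just placeholders for whichever noindexed pages you want recrawled):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page you want Google to revisit so it sees the new NoIndex tag -->
  <url>
    <loc>http://www.example.com/old-page-1/</loc>
    <lastmod>2016-01-01</lastmod> <!-- optional; an updated date hints that the page has changed -->
  </url>
  <url>
    <loc>http://www.example.com/old-page-2/</loc>
  </url>
</urlset>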
-
RE: Include or exclude noindex urls in sitemap?
If the pages are in the index and you've recently added a NoIndex tag with the express purpose of getting them removed from the index, you may be better served doing crawl requests in Search Console of the pages in question.
-
RE: 301 redirects
Having too many 301 redirects in a chain can have a negative impact. I.e. don't 301 a page to another page that 301s to another page that 301s to another page, etc. etc. Google once stated they could follow 5 pages in a 301 chain before giving up. But honestly, why would you choose to redirect to a redirecting page when you could point it at something much more relevant? As for having a bulk of 301s, I wouldn't worry. If you had 300 different pages that were all being redirected to 300 other pages, Google would not devalue you for it. If your redirects are relevant and are good for the user experience, then you're fine.
-
RE: What is the radius for local search results
For a local shop that has their Google My Business set to indicate they serve customers at their location (as opposed to a delivery radius or areas served), Google will base the businesses shown in a search either on the location the user specifies or on what they know about the businesses in the area in relation to the query, plus any geolocation information for the user in question. So there isn't exactly a radius bubble that you would need to fall into for those specific kinds of situations.
Now, for an industry like landscaping, cleaning companies, food delivery, emergency auto repair, tow trucks, etc., they can set a radius they serve within or they can set specific areas. So a locksmith might set a handful of postal codes as the regions they will drive to in order to fix your locks, while a pizza delivery service might choose to set a radius of 25km for their service area because they might not be able to reliably deliver outside that.
All of these things can be set up in their Google My Business account.
I know from personal experience that Google will show me things easily 100 miles away from my location if there is nothing in between that fits my search.
-
RE: Affiliate links and parameters creating duplicate page titles
How long ago did you implement canonicals to fix the issue? It's important to remember that a lot of SEO isn't about quick fixes. Some of these things will take weeks/months to completely filter through and for you to see the outcomes you were looking for.
You could "NoIndex, Follow" the pages if you really wanted to, but I'd suggest waiting a bit longer for the canonicals to start working. If the canonicals really don't seem to be working for some/all of those pages, and you don't want to give it more time after a few weeks, then you could reasonably "NoIndex, Follow" the pages... but you could be harming link equity that would have been attributed to the canonical page.
As for the 301 idea, it would really depend upon what the filtering parameters are doing. If it's filtering out sections of a page for users or it's helping you determine lead attribution in analytics, then I wouldn't want to indiscriminately 301 them. If they're being autogenerated but have canonicals in them from the start, then you should be fine. Like above though, if after a few weeks of them existing with canonicals Google is still throwing duplicate errors at you for those specific pages, then you can NoIndex them if you feel it is necessary.
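For reference, the "NoIndex, Follow" tag itself is just a one-liner in the <head> of each affected page; a generic sketch (not specific to your setup) would be:
<!-- Keeps the page out of the index but lets link equity continue to flow through its links -->
<meta name="robots" content="noindex, follow" />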
-
RE: Affiliate links and parameters creating duplicate page titles
It's important to remember that a canonical is a suggestion, not a directive. So adding those canonicals is the right way to do it, but the search engines and crawlers will determine at their leisure whether they believe the canonical is right or not. Sometimes it's a quick fix, and sometimes they don't accept the canonical at all. If the pages are exact duplicates then Google will get the picture eventually and start recognizing it. As for the Moz crawler, I haven't had a paid account in a while so I can't speak to the way the system works currently, but I do know that it used to be an issue where the Moz crawler would never seem to properly recognize a canonicalized duplicate... so that may still be the issue.
-
RE: Handling redirects when 2 companies merge
Based on my limited experience of this type of situation, I've felt a good start is to canonical each page on the old domain to its counterpart on the new domain and place a call to action on the old sites letting people know of the move and/or branding change. Submit a crawl request and let the bots begin filtering through the changes without affecting user experience yet. Then after some time, once your regulars have become acclimated to the upcoming changes and so have the bots, you can 301 those pages from the old site to the relevant counterparts they were already canonicalized to.
-
RE: Not Ranking - Any Tips?
It's only been a month and it's a moderately competitive landscape, so it's possible it will take some time for all the changes to fully filter through and start ranking you better. Have you done a crawl request on the site, are your pages all indexed, is everything redirected properly that needs to be redirected, is your robots.txt set up properly, and have you seen any growth in important metrics in analytics since the changes were made that might signal the changes are starting to work?
-
RE: Do you need contact details (NAP) on every page of your website for local search ranking ?
If there's no footer, why not at the top of the page? Something along the lines of "Located at the intersection of street and road in the center of Town" with a nice, obvious Click to Call?
-
RE: Do Search Engines Try To Follow Phone Number Links
While a bot might try to follow it (because it is, in its simplest form, a type of link), that will not in any way adversely affect you. The tel: in the tag will tip them off that it is a telephone number and/or should be click-to-call. So no link equity will be lost, you won't start seeing tons of 404 warnings, or anything of that sort.
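For anyone wondering, the markup in question is just a standard click-to-call link along these lines (the number here is obviously a placeholder):
<a href="tel:+15555551234">Call us at (555) 555-1234</a>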
-
RE: Impact of Non SEO Subdomains
It is _possible_ that a subdomain-based landing page could be cannibalizing rankings for specific terms from another page on your site. But if that landing page is actually that good for the term, it's not necessarily a bad thing to have it ranking. If it's ranking better than the pages you optimized for organic, then maybe you should look at why that is (i.e. is it getting good/better links, are people sharing it around, is it better targeted than the organic page, is it more intuitive or does it have a better call to action, etc. etc.).
Now, if you really don't want those pages to rank in organic in place of the optimized pages you created, then you can very easily add a NoIndex tag to the page or exclude it from being crawled in robots.txt.
-
RE: Menu Structure
You could always test the link to see if it is really being used from the secondary navigation more (or at all) than the main navigation link. Create a parameter and track it over a few months in analytics. That way you don't over-optimize in the interim, but 3 months from now (or less, or more, really that's up to you) you can definitively say whether it is better to remove it or if leaving it alone was the correct move.
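As a rough sketch of what I mean (the parameter name here is arbitrary; just keep it consistent, and make sure the parameterized URL is canonicalized or excluded so it doesn't create duplicate-content noise):
<!-- Main navigation link, left untouched -->
<a href="/widgets/">Widgets</a>
<!-- Secondary navigation link carrying a hypothetical tracking parameter -->
<a href="/widgets/?nav=secondary">Widgets</a>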
-
RE: Menu Structure
I can't really visualize from your description how the links are laid out. It's been a long day, so that might be it.
So it really depends on how it links back to those pages. E.g. if they're site navigation breadcrumbs then I don't see the problem, as it potentially establishes relevancy for the topic and facilitates movement through the site for both users and bots. If it's just an internal link for the sake of a link in the body of the page, maybe not so much. But if it's a completely relevant link and you see in analytics that people who enter on the one page are regularly going to the other, and vice versa, then obviously it is of use to the customer/visitor. If it's an issue of pages being link-heavy and you're worried that it's diluting link equity or creating a user experience issue, and you want to clean up the page and/or make it easier/more intuitive to use, then a heat map tool like Crazy Egg might be useful for helping you determine which links to keep and which are flak.
-
RE: When is Too Many Categories Too Many on a eCommerce site?
Are the categories helpful for the customer? On one hand you don't want to lump too many things into one category when they can be broken out into more granular categories that better serve visitors. On the other hand, it won't help you or your customers if you get too granular and break everything out into categories based on the most insignificant details.
While keyword cannibalization is a concern, serving your visitors/customers what they want, and how they prefer to see it, will likely improve your site's metrics more than concerning yourself with a nebulous concept like "how many categories is too many." If you have 200 different categories but they are well targeted and you want to add another (or ten more) that are also equally well targeted, then why wouldn't you do it?
-
RE: Viewing search results for 'We possibly have internal links that link to 404 pages. What is the most efficient way to check our sites internal links?
Do you have Google Webmaster Tools/Search Console set up for your site? They'll let you know through that tool when Google notices a 404 on your site. Alternatively, you could download a tool like Screaming Frog and run a crawl of your own site to see what 404s it finds.
-
RE: We 410'ed URLs to decrease URLs submitted and increase crawl rate, but dynamically generated sub URLs from pagination are showing as 404s. Should we 410 these sub URLs?
You could, but it's not completely necessary to go through all those sub-pages to 410 them. While a 410 Gone response is a stronger signal, those pages serving 404s will eventually be removed from the crawl and/or SERPs by the bots anyway. So if those pages are just dynamically-generated flak, and don't provide anything of benefit, then leave them as 404s and don't worry about it.
-
RE: Please let me know if I am in a right direction with fixing rel="canonical" issue?
As Logan said, you'd be better served handling these with 301 redirects. But you will also want to go into Site Settings in Google Search Console/Webmaster Tools and set your preferred domain to either WWW or Non-WWW (depending on which you prefer to show across your site).
-
RE: 404 broken URLs coming up in Google
Agreed. Go to Search Console, see what 404 errors Google is throwing your way, 301 redirect anything that can & should be redirected from the list to their most relevant equivalent on the live site, and then fetch & submit the site for a recrawl.
OR (since the links in question you posted were for a Test Site), if that test version needs to be up for internal testing purposes, then you can potentially NoIndex the pages, resubmit for crawl so the bots see the NoIndex on the pages, and then after they've dropped out of the SERPs you can update your robots.txt to disallow the folder those pages are sitting on. (Not sure if there's a better/quicker way to get them out of the SERPs if you still need it live.)
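The robots.txt piece of that last step is just a short block like the one below, assuming for the sake of the example that the test pages all live under a folder called /test/ (swap in whatever the real folder is):
User-agent: *
Disallow: /test/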
-
RE: Are these Search Console crawl errors a major concern to new client site?
The Soft 404s are probably because the archive and/or tag pages being crawled are predominantly empty and look like a 404'd page that is returning a 200. If Google is already indexing the actual articles/blog posts, then you can most likely safely NoIndex the archive pages and tag pages. Many of those pages exist for the visitor but wind up creating other problems like duplicate content issues, soft 404s, and so on.
For anything that is a legitimate 404 but is still coming up as a Soft 404, you should check whether your backend is serving the 404 response code properly, as there may be an issue there. Other legitimate 404s that are serving the proper 404 response (not Soft 404) are fine and can be marked fixed.
For those "Not Found" previous listings, you need to determine what (if anything) should be 301'd to an existing page so as not to lose link equity, and then determine what is gone forever so you can serve a 410 response on it (or leave them as 404s and they'll drop off eventually).
-
RE: Google Indexing Desktop & Mobile Versions
You can easily restrict portions with robots.txt depending on how exactly your site is set up. So for instance, something like:
Desktop site: http://www.domain.com/robots.txt
User-agent: Googlebot
Allow: /

User-agent: Googlebot-Mobile
Disallow: /

Mobile site: http://m.domain.com/robots.txt
User-agent: Googlebot
Disallow: /

User-agent: Googlebot-Mobile
Allow: /

Or

User-agent: *
Allow: /

User-agent: Googlebot
Disallow: /mobile/
Allow: /

User-agent: bingbot
Disallow: /mobile/
Allow: /

User-agent: Googlebot-Mobile
Disallow: /
Allow: /mobile/

User-agent: bingbot-mobile
Disallow: /
-
RE: Canoncial tag for Similar Product Descriptions on Woocommerce
I agree with Laura on this one. If the content of each page is 99% the same as the others (and/or 99% the same as what all your competitors are doing), then you're not going to rank and be found for these products, especially if there is an older, more established brand in your industry. Your best option is really to fill out those pages with more unique content. It can be daunting, but you can get them to rank and be found with just a little bit of work. (Trust me on this, I used to work for an ecommerce site that had a few hundred products [each with 7-12 micro-variations] that were legitimately the same thing as each other but with a slight color or texture difference at best... you'd be amazed how many ways there are to sell the same thing without duplicating copy.)
Throw together a landscape report, get an idea of all the various core terms in your industry, lay out a plan for what pages will use what term(s) and how, and if you don't have an in-house content writer it wouldn't hurt to look into hiring one (even part time) to get 89+ pages banged out for your site.
-
RE: 4000 new duplicate products on our ecommerce site, potential impact?
Is the 4000-product site still going to exist or is it being stripped and moved to the 9500-product site? If everything is getting completely moved from one site to the other, then you really do need to find out who has access to canonicals or 301 redirects so you can move the sites properly. If the smaller site is staying up and selling those products still, realize you'd potentially be cannibalizing your own traffic and could wind up with shoddy rankings from all the scraped/dupe content.
Since you have no access to Canonical/NoIndex/Robots/etc., the question is, what do you have access to? Do you need to move all these products over? Are they exact duplicates of things you have on your site already? If it's an exact duplicate of something you offer, then you probably shouldn't add a duplicate page, but you would canonical or 301 it if you were able to. If they're close but have slight differences, then you might be better served by adding a new product option to the existing page for the similar product in order to better serve the consumer, instead of diluting rankings with something so similar. Though you still might need that canonical or redirect to ensure everything is targeted properly.
-
RE: My title has a TM symbol and Moz says I don't have the keyword in my title
It's likely because it's attempting to find an exact match, so if you have Moz tracking [Pelican] it won't see [Pelican™] as the same thing.
-
RE: Webmaster tools Hentry showing pages that don't exist
Without more information or a site to look through, I did a cursory search for Hentry issues that could be causing your problem, along with the potential fixes:
https://www.acceleratormarketing.com/trench-report/google-analytics-errors-and-structured-data/
-
RE: Moz Point Swag
I figured as much, which is fine. I can wait until the day I get the box. It would just be great to walk in and put down that Roger figurine and get my boss all jealous.
-
Moz Point Swag
So, I've been away from the SEO world for a few years but now I'm back in full swing, and I noticed I have enough points for "A special MozPoints t-shirt and a Roger vinyl figurine" but never got those; I assume the swag must have been added during my hiatus... but I'd absolutely love to get hold of that, as my boss is a big Roger fanboy and it would be hilarious to have it on my desk as a friendly mocking.
If I can't, I can't. But I figured it couldn't hurt to ask.
-
RE: 301 Redirect Question
I ran a crawl in Screaming Frog as well. I don't see a problem with the 301s. They mostly seem to be pointing the non-www page to the www version... assuming you want the WWW version ranking over the non-WWW, then everything is fine. As long as everything is pointing to the correct version of the page, then you shouldn't have any issues.
-
RE: Duplicate content across a number of websites.
The problem, as stated by Logan and Don, is that if the sites for each of the 25 different locations are too similar, then none of them are going to do well in the SERPs. You need to determine how much of each site is going to be too similar and/or duplicate content and consolidate that. One way to do that, as stated by Don, is a single site with local options.
Some achieve this by using geolocation or entering in postal codes & either choosing their local store or having site parameters alter product availability. The content is then restricted by the offerings at the visitor's local store instead of showing all available options from the overarching corporation. So the product pages still exist and are crawlable but some color options may be grayed out where they aren't available or "Out of Stock" warnings will appear where applicable.
One other option I've seen is using differing subdomains to offer up the same basic idea as geolocation/postal codes, but in a way that could help with local organic search, e.g. NewYork.Webstore.xyz vs. London.Webstore.xyz. This would allow each location to essentially have its own mini-site on the company's main site (like a halfway point between one big single site and 25 duplicate content sites). Now, with the single site altered by location data, you only need one version of a product page, but you would need to write up some great localized landing pages for each individual store. For the subdomain idea, you'll want to canonicalize all the duplicates to a main version... so the pages for NewYork.Webstore.xyz/ProductA/ and London.Webstore.xyz/ProductA/ would have rel="canonical" pointing at your main site's page Webstore.xyz/ProductA/ so authority is passed to the root domain and you don't get penalized for duplicate content.
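Using those example URLs, both subdomain product pages would carry the same tag in their <head> (the protocol shown here is just for illustration):
<link rel="canonical" href="https://Webstore.xyz/ProductA/" />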
-
RE: Recommendations for the length of h1 tags and how much does it matter. What is the major disadvantage if the h1 tags are slightly longer.
From my understanding, there is technically no limit to the length of an H1 tag. Rule of thumb for me was always to keep it short and to the point. You don't want to water down any relevancy gained from the h1 by shoving too much into it.
-
RE: Weird 404 URL Problem - domain name being placed at end of urls
I had this problem in WordPress about a year ago. In my case it was caused by links entered into posts being turned into relative links instead of absolute links. Somehow this was causing the links to append the domain name to the end of the URL. In our case it turned out to be an incompatibility between plugins. Have you tested all your plugins to see if any of them are interfering and causing this issue?
-
RE: Facebook Reach on Post Just Spiked!
EdgeRank doesn't exist anymore. It's much more complex now and without a catchy name. (I still catch myself calling it EdgeRank when trying to explain Facebook feeds to people, though.) http://marketingland.com/edgerank-is-dead-facebooks-news-feed-algorithm-now-has-close-to-100k-weight-factors-55908
Check the post metrics in your Facebook Insights for a deeper understanding of that post. It might help you glean more ideas as to what specifically was different about this post, and maybe it can be replicated in future posts. Much of Facebook marketing is trying to determine when to post the things that should be seen right away, when to post the interesting stuff that can sit all day, when to tag whom & how, varying images & links & shares, and determining what your community appreciates so you can deliver more of that to them.
-
RE: When does Moz update campaign data with new timeframe ?
It should be every week, on or around the day of the week that the campaign was created, if I'm not mistaken. I get my updates every Friday afternoon, except for one campaign that updates on Tuesdays.
Edit: And because I forgot that function was added: when you're in your account, if there's a tab on the right side of the screen that says FAQ, you can click that and then click on a whole bunch of different things to get those answers directly. Trying it out now, when I click on the week listing it pops up: "Q: How often will my site be crawled? A: We crawl your site once a week, usually on the same day each week. For example, if you set up a campaign on Monday, the first crawl will be done on Tuesday and all of your subsequent crawls should complete on Tuesday going forward."
-
RE: Is it worth pursing PR and guest posting just for links?
Pursuing any avenue just for links is the wrong way to go about it. Press release links have gotten hit hard lately because of their misuse and overuse... but if you have some news about your business that is actually PR-worthy, then shopping it around to respectable sites and getting your news out there to the right people can increase your qualified traffic. The same goes for guest posting. It's been hit hard lately, but it's not about the links per se. Getting your name out there, branding, and sharing useful information or something humorous or poignant can help people learn who you are, increase your qualified traffic, etc. etc., and you don't need a followed link to reap the benefits. It also wouldn't hurt to look into social for branding and community purposes. And a product/tool/widget/infographic can also be a great way to gain links and/or spread via word of mouth/social mentions. But be sure not to go embedding any hidden links in sharable widgets or you'll get slapped by Google as well. As Andy put it, creativity is the key here. There are so many ways of earning links, getting shared, and being seen online that there is practically no limit to what is possible.
-
RE: Number of indexed pages dropped dramatically
It is unlikely that having links from MyBlogGuest would cause a drop in indexed pages like that. Where are you seeing this drop in indexed pages? Is it being reported in Moz or Google Webmaster Tools? Also, do you have Google Analytics set up for your site to check other metrics? A large drop in indexed pages does not necessarily mean something is wrong (canonical tags, cleaning up duplicate content, reporting errors, noindex tags, etc. can all cause a drop in indexed pages).
-
RE: Number of indexed pages dropped dramatically
Have you seen any corresponding drops in traffic? Have you made any recent changes? Redirects, canonicals, a site remodel, link restructuring, changed hosting, an updated CMS, etc. etc.? It's a bit unlikely someone will come up with the correct reason without more information.
-
RE: Matt Cutts says 404 unavailable products on the 'average' ecommerce site.
Personally, I prefer leaving unavailable products (ones that will never come back) up & accessible for a set amount of time, placing a notice & link on the page to the most relevant available product or related category page, placing a canonical on the unavailable product page to that related product/category page, and then, after a few months, redirecting the unavailable product to the related page.
-
RE: How come www.ifundinternational.com beat us despite that most links seems VERY shady?
If they are, in fact, breaking guidelines that Google should be penalizing them for then there is always the spam reporting tool. https://www.google.com/webmasters/tools/spamreport (you need to log in/be logged in to webmaster tools in order to use it).
-
RE: Duplicate description error: one for meta one for og:type
I don't know if it's common practice, but it's not a new fix. I've never heard of there being any issues with doing it that way (at least no one has ever told me this suggestion threw other errors for them if it has).
-
RE: Duplicate description error: one for meta one for og:type
Have you tried combining them into one? e.g.
name="description" property="og:description" content="My meta description copy."/>