Hoping I can do this, but am not 100% sure we can. Thanks, Chris!
Posts made by Jen_Floyd
-
RE: How much do I have to differentiate syndicated content, exactly?
-
How much do I have to differentiate syndicated content, exactly?
We have about 15-20 articles we'll repurpose on a partner domain (think: media outlet). To avoid duplicate-content suspicion, how much exactly do we need to differentiate the content on the second domain? Yea, this assumes we can't obtain a canonical for whatever reason.
I've found some good advice here, but am looking for some quantification. Like: "A sentence/paragraph of introduction at the top of the piece, plus a link back to the original at the end of said introduction ought to do it."
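For reference, the cross-domain canonical we're assuming we can't get would just be one line in the head of the partner's copy, pointing back at our original (hypothetical URL):

    <link rel="canonical" href="http://www.oursite.example/original-article" />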
Any help is appreciated. Thanks! Tim
-
When will true multi-user log-in be available in Moz Pro? How about an archive?
Our team is growing and we are running into each other's research in the tool! Will our users ever be able to use their own instances of Moz?
Also, how about a historical archiving function for research? Would be nice to record past research as similar requests come to us in the future.
Wish list stuff - thx.
-
RE: JavaScript Issue? Google not indexing a microsite
I need to check with the site owner that we aren't blocking JavaScript files from being crawled via robots.txt - just read something about that here. But I don't think so.
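If we were blocking them, it'd be a rule along these lines in robots.txt (the paths here are just hypothetical examples):

    User-agent: *
    Disallow: /js/
    Disallow: /scripts/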
My Art of SEO book is pretty clear about JavaScript creating indexing issues: "Links in hard-to-parse JavaScript – If you use JavaScript for links, you may find that search engines either do not crawl or give very little weight to the embedded links. Links embedded inside Java and plug-ins are invisible to search engines."
I just saw a March 2014 video of Matt Cutts saying Google's gotten better at this. But I suspect the microsite isn't being indexed because there are no simple links to it on the page - NOT because we haven't submitted the URL in a sitemap or via a Fetch operation.
We have another microsite that's newer and simply linked, and doing just fine - fully indexed.
Thanks for your help....
-
RE: JavaScript Issue? Google not indexing a microsite
can't disclose, sorry
-
RE: JavaScript Issue? Google not indexing a microsite
Very large...>100,000 pages. We are frequently crawled...91 DA, big business.
-
JavaScript Issue? Google not indexing a microsite
We have a microsite that was created on our domain but is not linked to from ANYwhere EXCEPT within some JavaScript elements on pages on our site. The link is in one jQuery slide panel.
The microsite is not being indexed at all - when I do site:(microsite name) on Google, it doesn't return anything. I think it's because the link exists only in a JavaScript element, but my client assures me that if I submit the URL to Google for crawling, the problem will be solved.
Maybe so, but my point is that if we just created a simple HTML link from at least one of our site pages, it would get indexed no problem. The microsite has been up for months and it's still not being indexed - another, newer microsite that's been up for a few weeks and has simple links to it from our pages is indexing fine.
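To illustrate - simplified, hypothetical markup, not our actual code. The first link only exists after the script runs, so crawlers may never see it; the second is a plain anchor they reliably follow:

    <!-- link injected into a jQuery slide panel at runtime -->
    <script>
      $('#slide-panel').append('<a href="http://microsite.example/">Our microsite</a>');
    </script>

    <!-- versus a plain HTML link, crawled no problem -->
    <a href="http://microsite.example/">Our microsite</a>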
I have submitted the URL for crawling but had to use the google.com/webmasters/tools/submit-url/ method as I don't have access to the top level domain WMT account.
p.s. when we put the microsite URL into the SEOBook spider-test tool, it returns lots of lovely information - but that just tells me the page is findable and does exist, right? That doesn't mean Google's necessarily going to index it, as I'm surmising... Moz hasn't found it in the 5 months the microsite has been up and running. What's going on here?
-
RE: Do you say Browser Title or Page Title?
Awesome - thanks RaymondPP and Who Wudda Thunk...
Now I get to say nana nana booboo to some of the "browser title" stalwarts. ;>
-
Do you say Browser Title or Page Title?
I have seen much more use of "Page Title" of late...in fact hardly any use of "Browser Title" except by some folks who are using a very old CMS.
We need to go with one at my workplace to avoid confusion. My vote is Page Title. Thoughts?
Thanks, Tim
-
RE: Does Google pass link juice a page receives if the URL parameter specifies content and has the Crawl setting in Webmaster Tools set to NO?
Update - Google has crawled this correctly and is returning the correct, redirected page. Meaning, it seems to have understood that we don't want any of the parametered versions of our original page and its campaign-tracked brethren indexed ("return representative link"), and it is redirecting from the representative link correctly.
And finally there was peace in the universe...for now. ;> Tim
-
RE: Does Google pass link juice a page receives if the URL parameter specifies content and has the Crawl setting in Webmaster Tools set to NO?
Agree...it feels like leaving a bit to chance, but I'll keep an eye on it over the next few weeks to see what comes of it. We seem to be re-indexed every couple of days, so maybe I can test it out Monday.
BTW, this issue really came up when we were creating a server-side 301 redirect for the root URL, and then I got to wondering if we'd need to set up an iRule for all parameters. Hopefully not...hopefully Google will figure it out for us.
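For what it's worth, most server-side redirect rules carry the query string over to the target automatically, so a single rule may cover all the parametered versions too - sketched here in Apache syntax with made-up paths (our setup is on an F5, so the actual iRule would look different):

    # ?cid=... and friends get appended to the target automatically
    Redirect 301 /old-page http://www.example.com/new-page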
Thanks Peter. Tim
-
RE: Does Google pass link juice a page receives if the URL parameter specifies content and has the Crawl setting in Webmaster Tools set to NO?
This question deals with dynamically created pages, it seems, and Google seems to recommend NOT choosing the "No" option in WMT for those - choose "Yes" when you edit the parameter settings and you'll see an option for your case, I think, Christian (I know this is 3 years late, but still).
BUT I have a situation where we use SiteCatalyst to create numerous tracking codes as parameters on a URL. Since a new page is not being created, we are following Google's advice to select "No" - apparently Google will:
"group the duplicate URLs into one cluster and select what we think is the "best" URL to represent the cluster in search results. We then consolidate properties of the URLs in the cluster, such as link popularity, to the representative URL."
What worries me is that a) the "root" URL will somehow not be returned (perhaps due to the freakish amount of inbound linking to one of our parametered URLs), and b) the root URL will not be getting the juice. The reason we got suspicious about this problem in the first place was that Google was returning one of our parametered URLs (PA=45) instead of the "root" URL (PA=58).
This may be an anomaly that will be sorted out now that we've changed the parameter setting from "Let Google Decide" to "No, page does not change" (i.e., return the "representative" link), but I would love your thoughts - esp. on the juice passage.
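If it regresses, one option would be to make the preferred version explicit with a canonical instead of leaving it to the parameter setting - a sketch with a made-up URL and tracking parameter:

    <!-- served on http://www.example.com/page?cid=spring-email -->
    <link rel="canonical" href="http://www.example.com/page" />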
Tim
-
RE: Any SEO value in gTLD redirect?
Awesome response Mike, and you're right on the trail of some other thoughts that are buzzing around the team right now.
Thanks so much - Tim
-
Any SEO value in gTLD redirect?
So, my client is thinking of purchasing several gTLDs with second-level keywords important to us. Stuff like this...we don't want .popsicles, just the domain with the second-level keyword. Those cost anywhere from $20-$30 right now:
- grape.popsicles
- cherry.popsicles
- rocket.popsicles
- companyname.popsicles
The thinking is that it's best to be defensive, not let a competitor get the gTLD with our name in it (agreed) and not let them capitalize on a keyword-rich gTLD (hmm). The theory was that we or a competitor could buy this gTLD and redirect it to our relevant page for, say, cherry popsicles. They wonder if that would help that gTLD page rank well - and sort of work in lieu of AdWords for pages that are not ranking well.
I don't think this will work. A redirected page shouldn't rank better than the page it redirects to...unless Google gave it points for an Exact Match in the URL. Do you think they will - does Google grade any part of a URL that redirects?
Viewing this video from Matt Cutts, I surmise that a gTLD would be ranked like any other page - if its content, inbound links, etc. support a high DA, well, OK then, you get graded like every other domain. In the case of a redirect, the page would not be indexed as a standalone, so that's a moot point, right?
So, any competitor buying a gTLD with hopes of ranking well against us would have to build up PageRank in that new domain...and for our purposes I see that being hugely difficult for anyone - even us. Still, a defensive purchase of some of these might not be a bad idea, since it's a fairly low-cost investment.
Other thoughts?
-
RE: Purchase second-level gTLDs?
Here's some info I found on second and third-level domains. ICANN does call third-level domains subdomains, and that's where I got confused before.
http://newgtlds.icann.org/en/applicants/customer-service/faqs/faqs-en
-
RE: Purchase second-level gTLDs?
Great answer, thanks - yea, I'm late to the dance on these, so thanks for the terminology tip too. Dealing with a huge CMS migration...and this came across my desk today, ugh.
So, when you said thin-content EMDs are an issue...did you mean that the main issue with EMDs is when you don't have the content to back them up? That they could be a liability?
Not that I would ever deal with that. Just curious. Thanks.
-
Purchase second-level gTLDs?
So, I've been asked if it makes SEO sense for our company to grab a bunch of second-level gTLDs (which we were earlier calling gTLD subdomains, incorrectly) so that we can capitalize on redirecting them to our relevant pages that might not be ranking as well (if Google treats them like EMDs).
For instance, buy something analogous to red.shoes, blue.shoes, purple.shoes and so on and then redirect them to our relevant pages for that product. Someone owns the .shoes domain but is happy to sell us second-level domains like red.shoes for $20-30.
The question is, if we scoop up 100 or so of these relevant to our product, will it matter? I guess it depends on how Google is going to treat these. Anyone know?
-
RE: Need a keyword tool for the whole company
Awesome - this would be great. While not quite as user-friendly as the Moz KW tool, it's a great option - thanks for clearing this up. Makes sense that it would be free, but a lot of things that make sense are not so.
BTW, I talked to someone at Moz who said they are working on a solution for enterprise use of their tools - multiple sign-ons or dashboards or such. Stay tuned, they said.
Thanks Vahe and EEE3
-
RE: Need a keyword tool for the whole company
Thanks so much for this thoughtful reply - I'm checking out the moz blog links you offered.
As for the Google Keyword Planner, it is my understanding that it only exists within AdWords, and again that's not something I could really share with the 20+ people working on other teams. I could barely get access to our AdWords account myself from our PPC manager.
Is there another way to access this, or a different version of the tool Google offers as a standalone? Yea, I so wish Google Trends was the answer, but it ain't.
Thx
-
Need a keyword tool for the whole company
I alone cannot do keyword research on all of the online content that my company produces. There are about 10 publishing support teams who could do this research themselves, once trained, but I don't know which tool to suggest they use, since they can't all use my Moz login.
Right now, those who do any research at all are using Google Trends. Wrong answer, but of course they are used to Trends for their social campaigns.
Has your company dealt with this situation? I've looked at a few free keyword tools...each seems to have its pluses and minuses. **What would you recommend...either as a free tool or a possible other workaround?**
-
RE: Instead of a 301, my client uses a 302 to custom 404
Travis, thanks - in addition to my comment to Wiqas, I think that usability is a big point to make. See, the analysts will come back to me and say, "we're not seeing a drop in any traffic to comparable pages." I'm going to do an in-depth look at Page Authority for a related report, but I agree 100% on the usability point. We do have comparable pages...why the heck wouldn't we 301? Esp. when external sites still occasionally use the legacy URL...
Thx.
-
RE: Instead of a 301, my client uses a 302 to custom 404
Thanks - yea, and it's funny because most of the analysts and devs I talk to say, "oh, 302 is just as good as 301, these days." Everything else I read runs contrary to that. Thanks Wiqas.
-
Instead of a 301, my client uses a 302 to custom 404
I've found about 900 instances of decommissioned pages being redirected via 302 to a custom 404 page, even when there's a comparable page elsewhere on the site or on a new subdomain.
My recommendation would be to always do a 301 from the legacy page to the new page, but since there are so many instances of this 302->404, it seems to be standard operating procedure for the dev team.
Given that at least one of these pages has links coming from 48 root domains, wouldn't it obviously be much better to 301 redirect it to pass along that equity? I don't get why the developers are doing this, and I have to build a strong case about what they're losing with this 302->404 protocol.
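If it helps the case, the difference is literally one word in the server config - hypothetical Apache syntax with made-up paths:

    # current pattern: temporary redirect into a dead end; the equity evaporates
    Redirect 302 /legacy-page /error/404

    # recommended: permanent redirect to the comparable page; the equity follows
    Redirect 301 /legacy-page /new-section/comparable-page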
I'd love to hear your thoughts on WHY the dev team has settled on this solution, in addition to what suffers as a result. I think I know, but would love some more expert input.
-
RE: Any idea when the mozcon 2013 videos will be out?
Any word now? I'm desperate to share one of the videos with my boss - in part to show her the value of the conference (hint hint!). Is there even a partial list of completed videos?
-
Scribd embed links - bad idea?
My client's site in question has a TON of outstanding, constantly updated, highly detailed articles. The site owner also has a branded collection of nearly all of them on Scribd. I guess I can live with that because dupe content isn't an issue and the PDFs there link back to the site and another domain of ours. Plus it gets a lot of eyeballs on our newish brand and content, and we can run reports on users.
BUT, we have Scribd social share buttons on each article on our site that (among other things) allow a user to grab a direct link to the content on Scribd or an embed link for their blog or whatever.
So, two questions really -
- Foremost, shouldn't we get rid of that embed option on our page? I mean, isn't it stealing from our backlink potential? I can't imagine juice would somehow pass back to us through a Scribd-located doc or embed, but I haven't found info affirming or contradicting that (see the illustrative embed markup after this list).
- And secondly, isn't a Scribd collection a bit analogous to posting videos on YouTube and hoping your page will ultimately benefit from it via clickthroughs, etc? At this year's MozCon I heard a strong argument against that.
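For context, the embed option hands the user an iframe along these lines (illustrative markup, not Scribd's exact code) - everything inside it is served from scribd.com, so any links in the doc resolve there, not on our domain:

    <iframe src="https://www.scribd.com/embeds/DOC_ID/content" width="100%" height="600" frameborder="0"></iframe>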
Thanks -