Thank you Tom!
Posts made by odmsoft
-
RE: WEBMASTER console: increase in the number of URLs we were blocked from crawling due to authorization permission errors.
Hello Thomas,
I really appreciate your help! You said I could look at your site's structure; what is your site's address?
Unfortunately, I still don't know what I need to do to remove that pharma hack from my site. If you know where to point me for an answer, I'll be very grateful.
Also, what tool did you use to generate this report: http://crawl.blueprintmarketing.com/projects/reports/215533?ro=75ad0c6e4afacc428b553d449dfd281f82ec2ad6 ?
And what tool did you use to create the XML sitemap?
Thanks
-
RE: WEBMASTER console: increase in the number of URLs we were blocked from crawling due to authorization permission errors.
Also, are you saying that the mobile site is blocked? And how can you tell that the site doesn't have an XML sitemap? What tool shows you all this information?
Thanks
-
RE: WEBMASTER console: increase in the number of URLs we were blocked from crawling due to authorization permission errors.
Hi Thomas,
I really appreciate your help! Can you advise me on what I should do? I see all these reports, but I don't know how to clean the site.
Thank you!
-
SSL redirect issue
Hi guys,
I have a site with some internal pages on SSL. Recently I noticed that if I open https://mydomain.com, the URL is accessible but the design is completely broken. My site is on WordPress, and I use the "Redirection" plugin for all my 301 redirects. So I added a new 301 redirect from https://mydomain.com to the actual URL version of my home page, http://mydomain.com. After doing that, my home page doesn't load at all. Does anybody know what's happening?
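For context, I'm wondering whether doing this redirect at the server level instead of in the plugin would avoid the problem. A rough sketch of what I mean, assuming Apache with mod_rewrite (mydomain.com is just a placeholder), placed above the default WordPress rules in .htaccess:

```apache
# Sketch only: 301 https://mydomain.com/ (home page only) to the http version
# before WordPress or the Redirection plugin ever sees the request.
RewriteEngine On
RewriteCond %{HTTPS} on
RewriteRule ^$ http://mydomain.com/ [R=301,L]
```

My guess is that if something else (the host, another plugin, or another rule) is forcing the home page back to HTTPS at the same time, a plugin-level redirect like mine would loop, which may be why the page stops loading.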
Thank you for advice!
-
Subdomain question
Hi guys,
I have a subdomain on my site that I want to completely remove from the index. I've already tried everything to remove it, but it's a special situation, so the only choice I have left is the "Remove URLs" feature in Search Console.
So my question is: if I remove my root subdomain (for example, http://subdomain.mydomain.com/) via the "Remove URLs" feature in Webmaster Console, will it remove all the URLs under that subdomain as well?
I also want to make sure that my root domain stays untouched and keeps functioning normally.
Thank you for advice!
-
RE: WEBMASTER console: increase in the number of URLs we were blocked from crawling due to authorization permission errors.
Thank you Thomas,
My site is clean, though, according to Sucuri. I spoke to the owner of this website, and they said they were hacked in the past and blocked those pages themselves. So is Google detecting those pages again? What exactly is happening? Does anybody know?
Thanks
-
RE: WEBMASTER console: increase in the number of URLs we were blocked from crawling due to authorization permission errors.
Hi Dirk,
In Webmaster Tools, if I click those links one by one, I can see the "Linked from" URLs. They look like this:
http://schwagginwagon.com/?page_name=Buying+Tadalis+SX+Safely+No+Prescription+Tadalis+SX&id=1810
and there is also one URL coming from my own domain; I'm not sure what that means.
I went through every single URL in the Google index, but all of them are normal URLs, nothing related to spam. Any ideas?
Thanks
-
RE: WEBMASTER console: increase in the number of URLs we were blocked from crawling due to authorization permission errors.
Hello Dirk,
Thank you for the fast reply! That was my thought right away, too. All of these URLs are forbidden when I try to access them. This is the message from Google Webmaster Tools: "Googlebot couldn't crawl your URL because your server either requires authentication to access the page, or it is blocking Googlebot from accessing your site."
Any ideas? Thanks
-
WEBMASTER console: increase in the number of URLs we were blocked from crawling due to authorization permission errors.
Hi guys,
I received this warning in my Webmaster Console: "Google detected a significant increase in the number of URLs we were blocked from crawling due to authorization permission errors." So I went to the "Crawl Errors" section and found errors like these under the "Access denied" status:
?page_name=Cheap+Viagra+Gold+Online&id=471
?page_name=Cheapest+Viagra+Us+Licensed+Pharmacies&id=1603
and many more URLs like these. Does anybody know what this is and where it comes from?
Thanks in advance!
-
RE: Verification of https domain in webmaster tool
Thank you for the help!
Unfortunately, these tutorials are not about https domains but about regular http domains.
Thanks anyway!
-
RE: Question about schema.org
Hi Miriam, thank you for helping! Yes, this is what I wanted to know: whether there is a way to mark up a local page that has a number of vendors on it (as opposed to individual pages with only one vendor).
Thanks
-
RE: Verification of https domain in webmaster tool
Hi Andy,
My bad, I mistakenly said I was unable to verify. I actually verified it. What I'm unable to do is set the preferred domain. I receive this error when I select the preferred domain (without www): "Part of the process of setting a preferred domain is to verify that you own http://www.youthfulhome.com/. Please verify http://www.mydomain.com/."
So I assume it treats https as a different domain and doesn't see that I already verified it. Any idea how I can set the preferred domain?
Thanks
-
Verification of https domain in webmaster tool
Hi guys,
I'm having a problem verifying a domain with https in Webmaster Tools. Does anybody know how to verify it?
Thanks
-
RE: Question about schema.org
Thank you all for helping me to figure it out!
Richard, I'm trying to assign structured data that identifies the whole page as a list of plumbing businesses from a specific area. I just want to add location markup and, potentially, the type of vendor (i.e. plumber, roofer, etc.). Is that possible?
Andy, can you elaborate on what you mean by "you can add schema to the listings on the page though"?
Thanks!
-
RE: Question about schema.org
Thank you for answering!
So yes, I see I haven't articulated my question clearly enough. Andy is right: I have local pages where several local businesses are featured on the same page. For example, 5 plumbers in Atlanta, GA. All of these local businesses are located in Atlanta, GA, but they have no relation to each other.
My initial question was whether I can mark up such local pages (each listing a number of local businesses from the same city) with schema.org so search engines will see that the page is about a type of local business (let's say plumbers) and its location (let's say Atlanta, GA).
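To make it concrete, here is the kind of markup I have in mind; just a sketch with made-up business names and URLs, using schema.org's ItemList and the Plumber subtype of LocalBusiness:

```html
<!-- Hypothetical sketch: an ItemList of LocalBusiness entries for a
     "plumbers in Atlanta, GA" page. Names and URLs are invented. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ItemList",
  "itemListElement": [
    {
      "@type": "ListItem",
      "position": 1,
      "item": {
        "@type": "Plumber",
        "name": "Example Plumbing Co.",
        "url": "https://www.example.com/atlanta/example-plumbing",
        "address": {
          "@type": "PostalAddress",
          "addressLocality": "Atlanta",
          "addressRegion": "GA"
        }
      }
    },
    {
      "@type": "ListItem",
      "position": 2,
      "item": {
        "@type": "Plumber",
        "name": "Sample Rooter LLC",
        "url": "https://www.example.com/atlanta/sample-rooter",
        "address": {
          "@type": "PostalAddress",
          "addressLocality": "Atlanta",
          "addressRegion": "GA"
        }
      }
    }
  ]
}
</script>
```

Each listing gets its own entry with its own address, which I assume is what keeps the entries from conflicting with each other.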
Thank you again!
-
RE: Question about schema.org
Hi guys, thank you so much for the insights! I really appreciate such rapid responses!
Tim Holmes, can you please advise what markup I should add to better establish the locality of the pages? You also said, "just make sure you implement it correctly so you don't conflict with each entry." Can you please give me more detail on how to implement it correctly so the entries don't conflict?
Andy Drinkwater, yes, I read that guidance and I understand it. However, as I understand it, it applies to ratings and reviews only. What I want to do is simply mark each local page with local markup. Please let me know if I'm missing something.
Thank you in advance!
-
Question about schema.org
Hi guys,
I have a website with many locally focused pages; in other words, we feature local businesses in many, many cities. So my question is: will it help if I add schema markup to each page, with each markup matching the city the page belongs to? Will it help those local pages rank?
Thanks
-
RE: Blocking Subdomain from Google Crawl and Index
Hello George, thank you for the fast answer! I read that article, but there is an issue with the approach; if you could look at it, I'd really appreciate it. The problem is that if I do it directly from Tumblr, it will also hide the blog from Tumblr users. Here is the note right below the option "Allow this blog to appear in search results":
"This applies to searches on Tumblr as well as external search engines, like Google or Yahoo."
Also, if I do it from GWT, I'm very concerned about removing URLs for my subdomain, because I'm afraid it will remove my whole domain. For example, my domain is abc.com, and the Tumblr blog is set up on tumblr.abc.com. I'm afraid that if I remove tumblr.abc.com from the index, it will also remove abc.com. Please let me know what you think.
Thank you!
-
RE: Blocking Subdomain from Google Crawl and Index
Hi guys, I read your conversation. I have a similar issue, but my situation is slightly different; I'd really appreciate it if you could help with this. I also have a subdomain that I don't want indexed by Google. However, that subdomain is not under my control: I created the subdomain on my hosting, but it points to my Tumblr blog, so I don't have access to its robots.txt. Can anybody advise what I can do in this situation to noindex that subdomain?
Thanks
-
RE: Question about multiple versions of home page
Hi guys, I appreciate your responses! ThompsonPaul, can you show me an example of the correct code for making such a redirect? Thank you!
-
RE: Issue with duplicate content
Peter, I'm trying to PM you, but I have no idea what to put in the "recipient" field. Thank you for your assistance.
-
RE: Issue with duplicate content
Hello Peter, thank you for helping!
Peter, why do you say that neither Moz nor Webmaster Tools is going to detect the duplicates between my subdomain and Tumblr? Moz is detecting it now. Can you elaborate on that?
Thanks
-
RE: Question about multiple versions of home page
Hi Patric,
Thanks for the answer! So it is just a regular 301 redirect? I'm asking because I once redirected /index.php to the preferred home page version (www.mydomain.com) on another site, and that site went down. I'm not sure I did it wrong (I don't think so, since it's a simple thing), but I'm afraid the same will happen here. Do you have any input?
Thanks
-
Question about multiple versions of home page
Hi guys,
I have a question I'm unable to find an answer to anywhere; I've browsed the whole internet, lol. The question is about multiple versions of the home page. In particular, I want to know how I should deal with home page URLs with this extension: /index.html
I know how to deal with all the other possible versions of the home page, but not this "/index.html" one. I did add a canonical tag to it, but I'm wondering whether or not I should also 301 redirect it to the chosen version of the home page (let's say it is www.mydomain.com). Please advise on the best practices for dealing with this.
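For anyone searching for the same thing later, this is the kind of redirect I mean; a sketch only, assuming Apache with mod_rewrite and www.mydomain.com as a placeholder for the preferred version:

```apache
# Sketch: 301 /index.html (and /index.php) to the bare root URL.
# The THE_REQUEST condition matches only the URL the browser actually
# requested, so internal rewrites (e.g. WordPress routing everything
# through index.php) don't trigger a redirect loop.
RewriteEngine On
RewriteCond %{THE_REQUEST} ^[A-Z]+\ /index\.(html|php)[?\ ] [NC]
RewriteRule ^index\.(html|php)$ http://www.mydomain.com/ [R=301,L]
```

I'm guessing a missing loop guard like that condition is also how a simple /index.php redirect could take a site down.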
Thanks in advance!
-
RE: Issue with duplicate content
Hello Kane,
Thank you for trying to help me!
I added a link to three screenshots. Two of them are from my Moz account: one shows the exponential increase in duplicate content, and the other shows the subdomain that duplicate content is coming from. The third screenshot is from my Gmail account, showing the notification from GWT about the deep links issue. I'm not sure whether these two issues have anything in common, but I feel that they do. Please let me know what you think.
Thanks
-
RE: Issue with duplicate content
Thank you for the help!
Answering your questions:
- My subdomain looks like this: photos.domain.com. It is pointed at the Tumblr platform (our blog is on Tumblr) because Tumblr is a very image-friendly platform, and they host all the images.
- We use this subdomain only for posting images. We don't use this content on our root domain at all.
I'm really confused about what Android app they are talking about. Do they consider Tumblr an Android app?
Thanks
-
RE: Issue with duplicate content
Thank you for the replies!
I'm fairly well aware of duplicate content issues, but I have never faced this particular one. As Lesley said, I don't have access to the head section of each post, because those posts are effectively not on my property but on Tumblr's. And I have no idea how the duplication is created; I assume it is caused by Tumblr's feature that allows users to re-blog my blog posts.
Moreover, I've just received a warning from Google Webmaster Tools specifically pertaining to this subdomain. I'm really confused. Please help:
Fix app deep links to ....com/ that don't match the content of the web pages
Dear webmaster,
When indexing the deep links to your app, we detected that the content of 1 app page doesn't match the content of the corresponding web page. This is a bad experience for your users because they won't find what they were looking for on your app page. We won't show deep links for these app pages in our smartphone search results. This is an important issue that needs your immediate attention.
Take these actions to fix this issue:
- Check the Android Apps section of the Crawl Errors report in Webmaster Tools to find examples of app URIs whose content doesn't match their corresponding web page.
- Use these examples to debug the issue:
- Open the corresponding web page to have it ready.
- Use the Android debug bridge to open the app page.
- Make sure the content on both your web page and your app page is the same.
- If necessary, change the content in your app (or change your sitemap or rel=alternate element associations to make sure that each app page is connected to the right web page).
- If necessary, change your robots.txt file to allow crawling of relevant resources. This mismatch might also be due to the fact that some of the resources associated with the app page are disallowed from crawling through robots.txt.
-
RE: Issue with duplicate content
Hello Lesley,
Thank you for the response! Well, the subdomain points to our Tumblr blog. I have access to both the main domain and Tumblr. Where should I add the canonical?
Thanks
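In case it's useful for the discussion, here is where I'm guessing it would go: Tumblr lets you edit the theme's HTML, so the tag could sit in the theme's head section. A sketch only; {block:PermalinkPage} and {Permalink} are Tumblr theme operators, and I'm assuming they behave as documented:

```html
<!-- Hypothetical sketch for the Tumblr theme's <head> section. -->
<!-- Option A: keep the blog indexable, but point engines at one
     canonical URL per post page. -->
{block:PermalinkPage}
<link rel="canonical" href="{Permalink}" />
{/block:PermalinkPage}

<!-- Option B: keep the whole subdomain out of search results entirely. -->
<meta name="robots" content="noindex, nofollow" />
```

Please correct me if the canonical belongs somewhere else.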
-
Issue with duplicate content
Hello guys, I have a question about duplicate content. Recently I noticed that Moz's system reports a lot of duplicate content on one of my sites. I'm a little confused about what I should do with it, because this content is created automatically. All the duplicate content comes from a subdomain of my site where we share cool images with people. This subdomain actually points to our Tumblr blog, where people re-blog our posts and images a lot.
I'm really confused about how all this duplicate content is created and what I should do to prevent it. Please tell me whether I need to "noindex, nofollow" that subdomain, or suggest something better to resolve the issue.
Thank you!