How do I authenticate a script with Search Console API to pull data
-
Regarding this article: https://moz.com/blog/how-to-get-search-console-data-api-python
I've gotten all the way to the part where I need to authenticate the script. I grant access to GSC and the localhost code comes up in the browser. The article says to grab the portion between = and #, but that doesn't seem to be the case anymore. This is what comes up in the browser:
When I put portions of it in, it always comes back with an error.
Help!
-
Hi Jo. I think you want everything after code= and before the &.
In the example you pasted, that would be:
4/igAqIfNQFWkpKyK6c0im0Eop9soZiztnftEcorzcr3vOnad6iyhdo3DnDT1-3YFtvoG3BgHko4n1adndpLqjXEE
If that doesn't work (or rather, if it still doesn't work when you re-run the flow and use whatever new value comes up, since these codes are single-use), let us know and I'll pull in someone who has done this themselves (I'm just reading the same instructions!).
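For what it's worth, the value the article is describing is just the code query parameter in the redirect URL, so rather than eyeballing it you can let Python pull it out. A minimal sketch (the example URL below is made up to mimic the format; your real redirect URL and code will differ):

```python
from urllib.parse import urlparse, parse_qs

def extract_auth_code(redirect_url: str) -> str:
    """Return the value of the `code` query parameter from the
    redirect URL Google sends you to after you grant access."""
    query = parse_qs(urlparse(redirect_url).query)
    return query["code"][0]

# Hypothetical redirect URL, shaped like what the browser shows
# (real codes are single-use and look different each time):
url = ("http://localhost/?code=4/igAqIfNQFWkpKyK6c0im0Eop9soZiztnft"
       "Ecorzcr3vOnad6iyhdo3DnDT1-3YFtvoG3BgHko4n1adndpLqjXEE"
       "&scope=https://www.googleapis.com/auth/webmasters.readonly")

print(extract_auth_code(url))
```

This avoids the off-by-a-character errors that come from copying part of the string by hand, and it keeps working even if Google changes which other parameters appear in the URL.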
Good luck