Can Google crawl dynamically generated links?
-
Thanks in advance!
-
A few years ago I added a location finder for ERIKS which then auto-generated product content. This created thousands of URLs overnight, and the problem was that Google thought I was spamming its index. Google does follow all links on a web page/sitemap; it's what it does with them that counts. The ERIKS Hose Technology site http://www.eriks-hose-technology.com relies on this type of coding using Classic ASP.
I have a number of other sites which also rank highly on Google; you'll find Revolvo by searching for 'Split Roller Bearings'. So Google is not as bad at ranking these URLs as some people tend to think. I would agree, however, that readable English URLs are better than coded ones. The main thing is to make sure your main keyword is in the URI, so that Google knows what your page is about.
-
For the most part, Google can crawl any links on the page, but I could do with a bit more information about which pages are being linked to and how the links are being generated.
If you are using JavaScript or even (shudder!) Flash to generate the links, then search engines will struggle to see them.
If it's in HTML then it should be fine, but you need to be careful of duplicate content. If a single page can be called up by multiple dynamically generated URLs, it can harm search ranking if rel=canonical tags have not been used.
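To illustrate the rel=canonical point: whichever dynamic URL a page is served under, its head should carry one canonical link element pointing at the preferred URL. A quick stdlib sketch for checking that a page has one (the example.com URLs are hypothetical):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Records the href of the first <link rel="canonical"> tag seen."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical" and self.canonical is None:
            self.canonical = attrs.get("href")

html = """
<html><head>
  <title>Blue Widgets</title>
  <link rel="canonical" href="https://example.com/widgets/blue">
</head><body>...</body></html>
"""

finder = CanonicalFinder()
finder.feed(html)
print(finder.canonical)  # https://example.com/widgets/blue
```

Run against each dynamic variant of a page, the script should report the same canonical URL every time; if it reports None, the tag is missing.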
-
Hi,
For the most part, Google can find its way around dynamic pages with few problems. There are always going to be exceptions, but these tend to arise when there are lots of parameters in the URL structure.
As long as it is pretty simple, you shouldn't have any problems.
-Andy
-
Well, it can crawl anything it finds on a web page. If you are referring to a page whose links are dynamically generated in the sense that you build them server-side before serving the page (PHP, for example), then yes: if Googlebot reaches that page at all (it is not blocked, etc.), your links will be crawled as well.
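To make the server-side point concrete: by the time the response leaves the server, "dynamically generated" links are ordinary HTML anchors, indistinguishable from hand-written ones. A minimal sketch (product slugs here are made up, and the templating is simplified compared to real PHP/ASP):

```python
# Server-side link generation: the crawler only ever sees the finished
# HTML, so these links are as crawlable as static ones.
products = [
    ("hydraulic-hose", "Hydraulic Hose"),
    ("ptfe-hose", "PTFE Hose"),
]

links = "\n".join(
    f'<a href="/products/{slug}/">{name}</a>' for slug, name in products
)
page = f"<html><body>\n{links}\n</body></html>"

print(page)
```

Contrast this with client-side generation, where the anchors only exist after JavaScript runs in the browser.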
Related Questions
-
Crawl Stats Decline After Site Launch (Pages Crawled Per Day, KB Downloaded Per Day)
Hi all, I have been looking into this for about a month and haven't been able to figure out what is going on. We recently did a website re-design and moved from a separate mobile site to responsive. After the launch, I immediately noticed a decline in pages crawled per day and KB downloaded per day in the crawl stats. I expected the opposite to happen, as I figured Google would be crawling more pages for a while to figure out the new site. There was also an increase in time spent downloading a page; this has gone back down, but the pages crawled has never gone back up. Some notes about the re-design: URLs did not change; mobile URLs were redirected; images were moved from a subdomain (images.sitename.com) to Amazon S3; there was an immediate decline in both organic and paid traffic (roughly 20-30% for each channel). I have not been able to find any glaring issues in Search Console, as indexation looks good, and there is no spike in 404s or mobile usability issues. Just wondering if anyone has an idea or insight into what caused the drop in pages crawled? Here is the robots.txt, and a photo of the crawl stats is attached:

```
User-agent: ShopWiki
Disallow: /

User-agent: deepcrawl
Disallow: /

User-agent: Speedy
Disallow: /

User-agent: SLI_Systems_Indexer
Disallow: /

User-agent: Yandex
Disallow: /

User-agent: MJ12bot
Disallow: /

User-agent: BrightEdge Crawler/1.0 (crawler@brightedge.com)
Disallow: /

User-agent: *
Crawl-delay: 5
Disallow: /cart/
Disallow: /compare/
```

[fSAOL0](https://ibb.co/fSAOL0)
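One sanity check worth running on a robots.txt like the one above: it blocks several specific bots outright but only restricts everyone else (including Googlebot) from /cart/ and /compare/, so it shouldn't explain a crawl decline. Python's stdlib can verify this; the sketch below uses a trimmed copy of the file and hypothetical paths:

```python
from urllib.robotparser import RobotFileParser

# Trimmed version of the robots.txt from the question.
robots_txt = """\
User-agent: MJ12bot
Disallow: /

User-agent: *
Crawl-delay: 5
Disallow: /cart/
Disallow: /compare/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Googlebot falls under the '*' group: only /cart/ and /compare/ are blocked.
print(parser.can_fetch("Googlebot", "/products/widget"))  # True
print(parser.can_fetch("Googlebot", "/cart/"))            # False
print(parser.can_fetch("MJ12bot", "/products/widget"))    # False
```

Note that Google has stated it ignores the non-standard Crawl-delay directive, so that line shouldn't throttle Googlebot either.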
Intermediate & Advanced SEO | BandG
Client has an inexplicable jump in crawled pages being reported in Google Search Console
Recently a client of mine noticed an inexplicable jump in crawled pages being reported in Google Search Console. We researched the following culprits and found nothing: rel=canonical tags are in place; there is no SSL/non-SSL duplication; we used a tool to extrapolate search query page data from Google Search Insights, and found nothing unusual; no dynamic pages are being made on the website; and all necessary landing pages are in the XML sitemap. Could this be a glitch in GSC? We are wondering what the heck is going on.
Intermediate & Advanced SEO | BigChad2
Can I add external links to my sitemap?
Hi, I'm integrating with a service that adds third-party images/videos (owned by them, hosted on their server) to my site. For instance, the service might have tons of pictures/videos of cars; then, when I integrate, I can show my users these pictures/videos about cars I might be selling. But I'm wondering how to build out the sitemap: I would like to include references to these images/videos, so Google knows I'm using lots of multimedia. What's the most white-hat way to do that? Can I add external links to my sitemap pointing to these images/videos hosted on a different server, or is that frowned upon? Thanks in advance.
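For what it's worth, Google's image sitemap extension does allow the image URLs in a sitemap entry to live on a different host (a CDN or partner server), as long as the page URL itself is on your site. A sketch of generating such a sitemap (all URLs below are hypothetical):

```python
# Build an image sitemap where <loc> is a page on our site but the
# <image:loc> values point at a third-party host.
page_images = {
    "https://www.example.com/cars/sedan": [
        "https://media.example-partner.com/img/sedan-front.jpg",
        "https://media.example-partner.com/img/sedan-side.jpg",
    ],
}

entries = []
for page, images in page_images.items():
    image_tags = "\n".join(
        f"    <image:image><image:loc>{src}</image:loc></image:image>"
        for src in images
    )
    entries.append(f"  <url>\n    <loc>{page}</loc>\n{image_tags}\n  </url>")

sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"\n'
    '        xmlns:image="http://www.google.com/schemas/sitemap-image/1.1">\n'
    + "\n".join(entries)
    + "\n</urlset>"
)
print(sitemap)
```

Video sitemaps have an analogous extension namespace; check Google's current documentation for any cross-domain verification requirements before relying on this.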
Intermediate & Advanced SEO | SEOdub
Can't generate a sitemap with all my pages
I am trying to generate a sitemap for my site nationalcurrencyvalues.com, but none of the tools I have tried pick up all of my 70,000 HTML pages... I have found that the one at check-domains.com crawls all my pages, but when it writes the XML file most of them are gone, seemingly at random. I have used this same site before and it worked without a problem. Can anyone help me understand why this is, or point me to a utility that will map all of the pages? Kindly, Greg
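One possible culprit worth ruling out: the sitemaps.org protocol caps each sitemap file at 50,000 URLs (and 50 MB uncompressed), so a 70,000-page site needs at least two sitemap files tied together by a sitemap index, and some tools silently truncate instead. A sketch of the chunking (URLs are hypothetical):

```python
# Split a large URL list into protocol-compliant sitemap files of at
# most 50,000 URLs each.
CHUNK = 50_000
urls = [f"https://www.example.com/page-{i}" for i in range(70_000)]

def build_sitemap(chunk):
    body = "\n".join(f"  <url><loc>{u}</loc></url>" for u in chunk)
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{body}\n</urlset>"
    )

chunks = [urls[i:i + CHUNK] for i in range(0, len(urls), CHUNK)]
sitemaps = {f"sitemap-{n}.xml": build_sitemap(c) for n, c in enumerate(chunks, 1)}

print(sorted(sitemaps))  # ['sitemap-1.xml', 'sitemap-2.xml']
```

A sitemap index file (a `<sitemapindex>` listing each `sitemap-N.xml`) is then submitted in place of a single sitemap.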
Intermediate & Advanced SEO | Banknotes
What is Google supposed to return when you submit an image URL into Fetch as Google? Is a few lines of readable text followed by lots of unreadable text normal?
I am seeing something like this (is this normal?):
HTTP/1.1 200 OK
Intermediate & Advanced SEO | Autoboof
Server: nginx
Content-Type: image/jpeg
X-Content-Type-Options: nosniff
Last-Modified: Fri, 13 Nov 2015 15:23:04 GMT
Cache-Control: max-age=1209600
Expires: Fri, 27 Nov 2015 15:23:55 GMT
X-Request-ID: v-8dd8519e-8a1a-11e5-a595-12313d18b975
X-AH-Environment: prod
Content-Length: 25505
Accept-Ranges: bytes
Date: Fri, 13 Nov 2015 15:24:11 GMT
X-Varnish: 863978362 863966195
Age: 16
Via: 1.1 varnish
Connection: keep-alive
X-Cache: HIT
X-Cache-Hits: 1
[binary JPEG data follows: a JFIF header containing "CREATOR: gd-jpeg v1.0 (using IJG JPEG v80), quality = 75", then unreadable compressed image bytes]
How hard would it be to take a well-linked site, completely change the subject matter & still retain link authority?
So, this would be taking a domain with a domain authority of 50 (200 root domains, 3500 total links) and, for fictitious example, going from a subject matter like "Online Deals" to "The History Of Dentistry"... just totally unrelated new subject for the old/re-purposed domain. The old content goes away entirely. The domain name itself is a super vague .com name and has no exact match to anything either way. I'm wondering, if the DNS changed to different servers, it went from 1000 pages to a blog, ownership/contacts stayed the same, the missing pages were 301'd to the homepage, how would that fare in Google for the new homepage focus and over what time frame? Assume the new terms are a reasonable match to the old domain authority and compete U.S.-wide... not local or international. Bonus points for answers from folks who have actually done this. Thanks... Darcy
Intermediate & Advanced SEO | 94501
Google Generating its Own Page Titles
Hi there, I have a question regarding Google generating its own page titles for some of the pages on my website. I know that Google sometimes takes your H1 tag and uses it as a page title; however, can anyone tell me how I can stop this from happening? Is there a meta tag I can use, for example like the NOODP tag? Or do I have to change my page title? Thanks, Sadie
Intermediate & Advanced SEO | dancape
Google consolidating link juice on duplicate content pages
I've observed some strange behaviour on a website I am diagnosing, and it has led me to a possible theory that seems to fly in the face of a lot of accepted thinking. My theory is:
Intermediate & Advanced SEO | James77
When Google sees several duplicate-content pages on a website and decides to show just one version of the page, it at the same time aggregates the link juice pointing to all the duplicate pages, and ranks the one page it decides to show as if all the link juice pointing to the duplicate versions were pointing to that one version. E.g.:
Link X -> Duplicate Page A
Link Y -> Duplicate Page B
Google decides Duplicate Page A is the most important and applies the following formula to decide its rank:
Link X + Link Y (minus some dampening factor) -> Page A
I came up with this idea after I seem to have reverse-engineered it: the website I was trying to sort out for a client had this duplicate-content issue, so we decided to put unique content on Page A and Page B (not just one pair like this, but many). Bizarrely, after about a week all the Page A's dropped in rankings, indicating the possibility that the old consolidated link value had been correctly re-associated with the two now-distinct pages, so Page A would only be getting link value X. Has anyone got any tests or analysis to support or refute this?