Links of a page and links of its subpages. Recursion/Threads

I'm making a function that downloads the content of a website, then looks for the links on the page and recursively calls itself on each one, up to the 7th level. The problem is that this takes a long time, so I was looking to use a thread pool to manage these calls, but I don't know exactly how to divide the tasks across the thread pool.



This is my current code, without the thread pool.



import requests
import re

url = 'https://masdemx.com/category/creatividad/?fbclid=IwAR0G2AQa7QUzI-fsgRn3VOl5oejXKlC_JlfvUGBJf9xjQ4gcBsyHinYiOt8'


def searchLinks(url, level):
    print("level: " + str(level))
    if level == 3:
        return 0

    response = requests.get(url)
    enlaces = re.findall(r'<a href="(.*?)"', str(response.text))

    for en in enlaces:
        if en[0] == "/" or en[0] == "#":
            en = url + en[1:]
        print(en)
        searchLinks(en, level + 1)


searchLinks(url, 1)

python python-3.x multithreading recursion threadpool

asked Nov 20 at 22:20 by Ashel

1 Answer

You've got a lot of URLs here, so this is going to be a huge operation. For example, if each page has an average of only 10 links, you're looking at over 10 million requests if you want to recurse 7 layers deep.



For starters, use an HTML parsing library like BeautifulSoup instead of regex. This will give you a performance boost right off the bat, although I haven't tested the exact amount. Also avoid printing to stdout, which will slow things down further.
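For example, a minimal link-extraction sketch with BeautifulSoup (assuming bs4 is installed; get_links is an illustrative helper, not part of the original code):

import requests
from bs4 import BeautifulSoup


def get_links(url):
    # Parse the page once and collect every anchor's href attribute
    soup = BeautifulSoup(requests.get(url).text, "html.parser")
    return [a["href"] for a in soup.find_all("a", href=True)]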



As for threading, one approach is to use a work queue. Python's queue.Queue class is thread-safe, so you can create a pool of worker threads that poll it for URLs. When a thread gets a URL, it finds all the links on the page and appends each relevant URL (or the page data, if you wish) to a global list (list.append is also thread-safe). The new URLs are enqueued on the work queue and the process continues. Threads exit when the specified level reaches 0.



Another approach is to scrape the first URL for all of its links, then create that many threads in a thread pool and let each one run on a separate tree of links. This would eliminate queue contention and reduce overhead, as sketched below.
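Here's a rough sketch of that second approach using ThreadPoolExecutor from the standard library (crawl_tree is a hypothetical helper, and the depth limits and worker count are illustrative, not taken from the question):

from concurrent.futures import ThreadPoolExecutor

import requests
from bs4 import BeautifulSoup


def crawl_tree(url, level):
    # Single-threaded DFS below one URL; each top-level link gets its own task
    if level == 0:
        return []
    found = []
    try:
        response = requests.get(url)
    except requests.exceptions.RequestException:
        return found  # skip mailto:, javascript:, malformed URLs, etc.
    for a in BeautifulSoup(response.text, "html.parser").find_all("a", href=True):
        link = a["href"]
        if link and link[0] in "#/":
            link = url + link[1:]
        found.append(link)
        found.extend(crawl_tree(link, level - 1))
    return found


start_url = "https://masdemx.com/category/creatividad/"
top_links = crawl_tree(start_url, 1)  # just the links on the first page

# One subtree per task; the pool keeps at most 10 threads running at once
with ThreadPoolExecutor(max_workers=10) as pool:
    subtree_links = pool.map(lambda u: crawl_tree(u, 2), top_links)

all_links = [link for links in subtree_links for link in links]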



Either way, the idea is that threads block while waiting for request responses, which lets the CPU run a different thread in the meantime and makes up for the threading overhead (context switches, lock contention). If you'd like to run on multiple cores, read up on the GIL and look into spawning processes.
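For instance, a minimal multi-process variant using the stdlib multiprocessing module (fetch_links is a hypothetical helper; the worker count is illustrative):

import re
from multiprocessing import Pool

import requests


def fetch_links(url):
    # Runs in a separate process, so work here isn't serialized by the GIL
    try:
        return re.findall(r'<a href="(.*?)"', requests.get(url).text)
    except requests.exceptions.RequestException:
        return []


if __name__ == "__main__":
    urls = ["https://masdemx.com/category/creatividad/"]
    with Pool(processes=4) as pool:
        for links in pool.map(fetch_links, urls):
            print(len(links))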



Here's some example code for the first approach:



import queue
import requests
import threading
import time
from bs4 import BeautifulSoup


def search_links(q, result):
    while 1:
        try:
            # q.get() blocks until an item is available
            url, level = q.get()
        except queue.Empty:
            continue

        if not level:
            # Depth exhausted: this worker is done
            break

        try:
            for x in BeautifulSoup(requests.get(url).text, "lxml").find_all("a", href=True):
                link = x["href"]

                if link and link[0] in "#/":
                    link = url + link[1:]

                result.append(link)
                q.put((link, level - 1))
        except requests.exceptions.InvalidSchema:
            pass


levels = 2
workers = 10
start_url = "https://masdemx.com/category/creatividad/?fbclid=IwAR0G2AQa7QUzI-fsgRn3VOl5oejXKlC_JlfvUGBJf9xjQ4gcBsyHinYiOt8"
urls = []
threads = []
q = queue.Queue()
q.put((start_url, levels))

start = time.time()

for i in range(workers):
    threads.append(threading.Thread(target=search_links, args=(q, urls)))
    threads[-1].daemon = True
    threads[-1].start()

for thread in threads:
    thread.join()

print("Found %d URLs using %d workers %d levels deep in %ds" % (len(urls), workers, levels, time.time() - start))

# for url in urls:
#     print(url)


A few sample runs on my not-especially-fast machine:

> python thread_req.py
Found 7733 URLs using 1 workers 2 levels deep in 112s
> python thread_req.py
Found 7729 URLs using 10 workers 2 levels deep in 27s
> python thread_req.py
Found 7731 URLs using 20 workers 2 levels deep in 25s


That's a 4x performance boost on this small run. I ran into maximum-request errors on larger runs, so this is just a toy example. Also worth noting: with a queue you'll perform a BFS rather than the DFS you'd get with recursion or a stack.
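To see the traversal-order difference, here's a toy sketch on a hypothetical graph (not crawling code):

from collections import deque

graph = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}


def bfs(start):
    order, frontier = [], deque([start])
    while frontier:
        node = frontier.popleft()  # queue: FIFO, so breadth-first
        order.append(node)
        frontier.extend(graph[node])
    return order


def dfs(start):
    order, frontier = [], [start]
    while frontier:
        node = frontier.pop()  # stack: LIFO, so depth-first
        order.append(node)
        frontier.extend(graph[node])
    return order


print(bfs("a"))  # ['a', 'b', 'c', 'd']
print(dfs("a"))  # ['a', 'c', 'b', 'd']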



Try it!






answered Nov 21 at 0:03 by ggorlen, edited Nov 21 at 0:31