Is there any utility to control the number of threads when executing a binary?












make has a -j flag, which makes the build run faster.

This flag tells make how many 'threads' (jobs) it is allowed to spawn in parallel.

In the same manner, is there a simple way to apply a -j-like option when running an ordinary program?

For example, assume I want my Python script to run faster:

$ python myprogram.py -j4 // <--?

Is there any useful utility in Linux to control the number of threads, the way -j does?







linux multithreading optimization core execution






asked Nov 16 '18 at 8:21









Jiwon









  • No. (This space intentionally left blank)
    – n.m. Nov 16 '18 at 8:25











  • Also, the -j option to make has nothing to do with threads.
    – Shawn Nov 16 '18 at 8:37














1 Answer

Parallelizing a program has to be done by the programmer, not the user.



make computes a dependency tree for the target. Most targets depend on more than one input, e.g. an executable that is built from several parts, such as .c files compiled into .o files. The developers of make understood this: using the dependency tree, make can figure out which parts can be built independently of each other, and -j4 tells it to build up to 4 of them in parallel, for instance by starting 4 compiler processes (not threads!) at once.
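
To make the process-versus-thread distinction concrete, here is a minimal sketch in Python of what make -j4 effectively does: it runs independent build steps as separate child processes, at most four at a time. This is not make's actual implementation, and the gcc commands and file names are made-up placeholders.

    # Rough sketch: run independent build steps as parallel child processes.
    # The compiler commands and file names below are hypothetical placeholders.
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    commands = [
        ["gcc", "-c", "a.c", "-o", "a.o"],
        ["gcc", "-c", "b.c", "-o", "b.o"],
        ["gcc", "-c", "c.c", "-o", "c.o"],
        ["gcc", "-c", "d.c", "-o", "d.o"],
    ]

    def run(cmd):
        # Each worker thread only waits on one child process; the real work
        # happens inside the spawned compiler processes.
        return subprocess.run(cmd).returncode

    # At most 4 child processes run at the same time -- the "-j4" part.
    with ThreadPoolExecutor(max_workers=4) as pool:
        exit_codes = list(pool.map(run, commands))

    print(exit_codes)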



To accelerate your Python program, you yourself need to identify the portions that can be executed independently of each other, and that depends entirely on the specifics of the problem your Python program solves; there is no general solution, and many problems are very hard to parallelize.



Parallelization comes in two forms: processes and threads. Threads share their memory (except for the stack), and in Python they are limited by the Global Interpreter Lock (GIL), so gaining extra computing power from threads is often not possible for CPU-bound Python code. (The situation is different in C, C++ and Java, for instance, where threads can give you a real speedup.) Processes (as utilized by make), on the other hand, have a much harder time talking to each other (via shared memory, semaphores, sockets etc.), because they are truly independent of each other.



In Python, the modules multiprocessing and threading provide functionality for working with multiple processes and threads respectively.
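
As an illustration, here is a minimal sketch of how you could give your own script a hand-rolled -j option with multiprocessing.Pool; the crunch function and the work list are hypothetical stand-ins for whatever independent chunks your real program has:

    # myprogram.py -- hypothetical sketch of a self-implemented "-j" option.
    import argparse
    from multiprocessing import Pool

    def crunch(n):
        # Stand-in for one independent, CPU-bound chunk of your real work.
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        parser = argparse.ArgumentParser()
        parser.add_argument("-j", "--jobs", type=int, default=1,
                            help="number of worker processes to use")
        args = parser.parse_args()

        work = [10_000_000] * 8                 # 8 independent chunks
        with Pool(processes=args.jobs) as pool:
            results = pool.map(crunch, work)    # spread over the worker processes
        print(sum(results))

With that in place, python myprogram.py -j4 works, but only because the script itself defines and honours the flag; no external utility can retrofit such an option onto an arbitrary binary.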



Be advised that under Unix/Linux/POSIX, creating new processes from a program that has already created threads might easily give you deadlocks unless you are very careful.
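
One common mitigation is to select the "spawn" start method, so worker processes start fresh instead of being fork()ed from a parent that may already have threads; a minimal sketch:

    # Sketch: use the "spawn" start method so children do not inherit the
    # state of an already-threaded parent via fork().
    import multiprocessing as mp

    def worker(x):
        return x * x

    if __name__ == "__main__":
        mp.set_start_method("spawn")
        with mp.Pool(4) as pool:
            print(pool.map(worker, range(8)))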







        answered Nov 16 '18 at 21:18









digitalarbeiter
