Best way to provide a Redis client for extensible Go applications


























I'm using redigo in an application and I'm wondering how my services should interface with Redis.



Wikipedia has this to say about thread-safety:




Thread safety is a computer programming concept applicable to multi-threaded code. Thread-safe code only manipulates shared data structures in a manner that ensures that all threads behave properly and fulfill their design specifications without unintended interaction.




What I interpret this to mean is that if a data structure needs to be accessed by multiple clients (hundreds, thousands, if not millions in today's microservice world), thread safety is how we ensure that state is correctly preserved in the system regardless of which client accesses the data and when. This means resolving access priority (which client got there first) and ensuring a lock on mutation (only one client can write at a time), while still promoting concurrency (many clients can read the data when nothing is changing).



From what I've gathered, a redigo client can be used by multiple "goroutines" (or threads) concurrently. That leads me to believe that a singleton implementation, like the ones I'm familiar with in Java, should suffice.



I see examples, e.g., here and here, where Redis connections (pools) are simply created in the main method and passed in to various redigo functions. That doesn't seem like the most robust way to get things done, although they do appear to be following the singleton pattern. (Understandably, the second post is really just a quick 'n' dirty API.)



I would do it like this:




  1. In the main function, call an init function that returns a redigo pool.


  2. Create handler functions (controllers) that accept the pool as a parameter (a sort of "dirty" dependency injection).



This would (I think) ensure that only a single pool is ever created.
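Roughly, I'm picturing something like the minimal sketch below (the function names and the gomodule/redigo import path are just my assumptions for illustration, not taken from those examples):

    package main

    import (
        "log"
        "net/http"
        "time"

        "github.com/gomodule/redigo/redis"
    )

    // newPool is the "init" step: it builds the one pool the whole app shares.
    func newPool(addr string) *redis.Pool {
        return &redis.Pool{
            MaxIdle:     10,
            IdleTimeout: 240 * time.Second,
            Dial:        func() (redis.Conn, error) { return redis.Dial("tcp", addr) },
        }
    }

    // pingHandler is the "dirty" dependency injection: the pool comes in as a
    // parameter, and each request checks out its own connection from it.
    func pingHandler(pool *redis.Pool) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            conn := pool.Get()
            defer conn.Close()

            reply, err := redis.String(conn.Do("PING"))
            if err != nil {
                http.Error(w, err.Error(), http.StatusInternalServerError)
                return
            }
            w.Write([]byte(reply))
        }
    }

    func main() {
        pool := newPool("localhost:6379")
        defer pool.Close()

        http.HandleFunc("/ping", pingHandler(pool))
        log.Fatal(http.ListenAndServe(":8080", nil))
    }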



Alternatively, is there any reason why I can't create a pool (client) every time I want to access the data store? If the client is killed after the transaction is complete, is there any issue with spinning up a new pool every time a handler receives a request?










go redis redigo

asked Nov 12 at 15:01 by franklin

  • It's really inefficient. The entire point of a connection pool is to re-use connections and avoid the overhead involved in establishing a new connection. en.wikipedia.org/wiki/Connection_pool
    – Adrian
    Nov 12 at 15:03










  • Cool. So singleton access is the way to go.
    – franklin
    Nov 12 at 15:04






  • Redigo's pool documentation describes how to use a pool in a web application. You should create a single pool and share it.
    – ThunderCat
    Nov 12 at 15:32








  • See also Redigo's documentation on allowed concurrency. Redigo does not have a "client" type as mentioned in the question. Redigo does have a thread-safe pool and a partially unsafe connection.
    – ThunderCat
    Nov 12 at 15:57

1 Answer

The correct answer is already provided in the comments, but I still want to add my five cents.



Your question mixes up two concepts: concurrency and resource pooling.



Concurrent code makes sure that several execution flows work safely inside the same application while sharing memory.
In our example we have multiple HTTP requests from our users, and the handler code for those requests executes concurrently. A DB connection is part of that flow and is required to serve each request.



Concurrent code should be able to use a DB connection, opening and closing connections as necessary.



However, concurrency has nothing to do with the rules that define how many connections should be open at any given time and how those connections should be reused or shared:




  • Someone can create a DB connection per request, and that code is
    concurrent.

  • Someone can use shared DB connection(s), and that code is concurrent
    as long as multiple requests do not interfere with each other, which
    is typically achieved with locks (illustrated in the sketch below).
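To illustrate the second point with redigo (just a rough sketch, not tied to any web framework): many goroutines share one pool, each goroutine checks out its own connection, and the synchronization happens inside the pool rather than in your handler code.

    package main

    import (
        "fmt"
        "sync"
        "time"

        "github.com/gomodule/redigo/redis"
    )

    func main() {
        // One shared pool for the whole program; the pool itself is safe
        // for concurrent use by multiple goroutines.
        pool := &redis.Pool{
            MaxIdle:     5,
            IdleTimeout: 240 * time.Second,
            Dial: func() (redis.Conn, error) {
                return redis.Dial("tcp", "localhost:6379")
            },
        }
        defer pool.Close()

        var wg sync.WaitGroup
        for i := 0; i < 10; i++ {
            wg.Add(1)
            go func(n int) {
                defer wg.Done()
                // Each goroutine takes its own connection from the shared
                // pool; a single connection must not be shared like this.
                conn := pool.Get()
                defer conn.Close()

                if _, err := conn.Do("SET", fmt.Sprintf("key:%d", n), n); err != nil {
                    fmt.Println("redis error:", err)
                }
            }(i)
        }
        wg.Wait()
    }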


A connection pool, on the other hand, is a pattern that provides an efficient way to handle DB connections.
Why managing connection lifetime is important:




  1. A connection is expensive to open.

  2. A connection is expensive to keep alive while unused, because a database can keep only a limited number of connections open.


With a connection pool you can control:




  1. How many connections are kept open - warming up a slow resource

  2. The maximum number of open connections - preventing too many connections from being opened

  3. The idle connection timeout - balancing connection reuse against releasing unused resources
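With redigo these knobs roughly map onto the Pool fields as in the sketch below (the package name and the numbers are arbitrary examples, not recommendations):

    package storage

    import (
        "time"

        "github.com/gomodule/redigo/redis"
    )

    // NewPool configures the three knobs from the list above.
    func NewPool(addr string) *redis.Pool {
        return &redis.Pool{
            // 1. Keep up to this many idle connections around for reuse
            //    (redigo does not pre-open them; the pool fills up as
            //    requests come in).
            MaxIdle: 10,

            // 2. Cap the total number of open connections; with Wait set,
            //    callers block for a free connection instead of opening
            //    one more.
            MaxActive: 100,
            Wait:      true,

            // 3. Close connections that have been idle for this long.
            IdleTimeout: 240 * time.Second,

            Dial: func() (redis.Conn, error) {
                return redis.Dial("tcp", addr)
            },
        }
    }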


Maintaining a connection or a connection pool per request does not allow efficient connection reuse. Every request is slowed down by connection-opening overhead, a spike in traffic will open too many connections, and so on.



Typically an application has one connection pool shared between all requests.



Sometimes developers create several pools for different types of connections, for example one pool for transactional operations and one for reports.






answered Nov 12 at 19:40 by Dmitry Harnitski