Microservice architecture with ONE WebSocket connection per browser



























I'm following the typical microservice REST architecture, where multiple servers run and expose different controllers, each providing the services for one feature individually.



My question is this:



Assume my business case is a real-time web application that requires real-time computation and real-time responsiveness, with multiple clients in the application communicating with each other.



My options are essentially limited to a WebSocket connection from every browser, with mediator servers connecting the clients to each other.



But the architecture is a bit unclear to me, since I'm not interested in a monolithic mediator!



If I follow the REST microservice architecture, I'll force every browser to open many socket connections, which isn't my goal.



My approach is to receive all socket events through ONE socket connection per client and handle the routing in the backend.
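For what it's worth, here is a minimal browser-side sketch of that idea, assuming every message carries a `feature` field used for routing (the field name, event shape, and endpoint are my own assumptions, not part of any particular library):

```typescript
// One WebSocket per browser; feature modules share it and messages are
// multiplexed by a "feature" discriminator field (an assumed convention).
type AppEvent = { feature: string; type: string; payload: unknown };

const socket = new WebSocket("wss://example.com/ws"); // hypothetical endpoint

const handlers = new Map<string, (e: AppEvent) => void>();

// Each feature module registers a handler instead of opening its own socket.
export function onFeature(feature: string, handler: (e: AppEvent) => void) {
  handlers.set(feature, handler);
}

export function sendEvent(event: AppEvent) {
  // (a real client would wait for the open event and handle reconnects)
  socket.send(JSON.stringify(event));
}

socket.onmessage = (msg) => {
  const event = JSON.parse(msg.data) as AppEvent;
  handlers.get(event.feature)?.(event); // route to the owning feature module
};
```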



That leads me to imagine an architecture of multiple microservices along these lines:




  1. socket handler service

  2. feature 1 service

  3. feature 2 service


all connected with internal sockets, like one big backend mesh.



But that would fail once I need to scale out: every feature's backend servers would have to scale to support millions of requests per second.



So would that mean maintaining a cluster for each service, with all of them coordinating with each other?



By now you probably understand the reason for this question: I need some architectural input.



My pursuit of high maintainability and performance calls for a sophisticated architecture, but the more I think about it, the more I drift back to the monolith approach.



Is there any recommended architecture?










Tags: websocket architecture microservices






asked Aug 3 '18 at 13:20 by USS-Montana (edited Aug 3 '18 at 13:25)
























2 Answers






































I'm not sure about a recommended architecture, but I'll give my thoughts on it, since I've been battling with similar architectural decisions.



On the frontend, let's say you're handling 300k users. Assuming a single server can handle 5k socket connections, you'll have 60 servers sitting behind a load balancer. Each of those 60 servers will have roughly 5k socket connections open; if a user refreshes the browser, they'll get a new socket connection to any of the 60 servers.



Each of these 60 servers is connected to a Kafka cluster.



Upon connecting to any of these 60 servers, you would return some kind of identification token, a GUID or something (e.g. 309245f8-05bd-4221-864f-1a0c42b82133). That server would then broadcast via Kafka to the other 59 servers that GUID 309245f8-05bd-4221-864f-1a0c42b82133 is connected to it, and each of those 59 servers would update its internal registry to note that 309245f8-05bd-4221-864f-1a0c42b82133 belongs to Server1.
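As a rough sketch of that registration step, assuming Node.js with the `ws` and `kafkajs` libraries (the topic name, message shape, and ports are my own, purely illustrative):

```typescript
// Sketch of a frontend socket instance: accepts WebSocket connections, hands
// out a GUID, and broadcasts "this GUID belongs to me" to the other instances.
import { randomUUID } from "crypto";
import { WebSocketServer, WebSocket } from "ws";
import { Kafka } from "kafkajs";

const SERVER_ID = process.env.SERVER_ID ?? "Server1"; // identity of this instance
const kafka = new Kafka({ clientId: SERVER_ID, brokers: ["kafka:9092"] });
const producer = kafka.producer();
// Unique group id per instance so every instance sees every registry update.
const consumer = kafka.consumer({ groupId: `registry-${SERVER_ID}` });

const localSockets = new Map<string, WebSocket>(); // GUID -> socket on this instance
const registry = new Map<string, string>();        // GUID -> owning instance id

async function main() {
  await producer.connect();
  await consumer.connect();
  // Read from the beginning so a freshly started instance can rebuild the registry.
  await consumer.subscribe({ topics: ["client-registry"], fromBeginning: true });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const { guid, server } = JSON.parse(message.value!.toString());
      registry.set(guid, server);
    },
  });

  const wss = new WebSocketServer({ port: 8080 });
  wss.on("connection", async (socket) => {
    const guid = randomUUID();
    localSockets.set(guid, socket);
    socket.send(JSON.stringify({ type: "connected", guid }));
    // Tell the other instances that this GUID now belongs to this server.
    await producer.send({
      topic: "client-registry",
      messages: [{ key: guid, value: JSON.stringify({ guid, server: SERVER_ID }) }],
    });
    socket.on("close", () => localSockets.delete(guid));
  });
}

main().catch(console.error);
```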



You need to decide what happens when a user refreshes: do they lose existing messages, or do you want to retain those messages? If the user should continue receiving messages after refreshing, even though they are now connected to a new server, the browser needs to store that GUID in a cookie or similar. Upon connecting to the new server, that server will broadcast to the other 59 servers that 309245f8-05bd-4221-864f-1a0c42b82133 now belongs to Server2, and Server1 will update itself accordingly.



When storing the GUID on the frontend, you need to take security into account: if somebody hijacks that GUID, they can intercept your requests, so be sure to make cookies HttpOnly and Secure and set up the relevant CORS settings.
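For example, with Express the cookie carrying the GUID could be locked down roughly like this (a sketch; the route and cookie name are my own):

```typescript
import express from "express";
import { randomUUID } from "crypto";

const app = express();

app.post("/session", (_req, res) => {
  const guid = randomUUID();
  // The browser keeps the GUID, but page scripts cannot read it, it is only
  // sent over HTTPS, and it is never attached to cross-site requests.
  res.cookie("clientId", guid, {
    httpOnly: true,
    secure: true,
    sameSite: "strict",
  });
  res.sendStatus(204);
});

app.listen(3000);
```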



Your backend will be servers listening for messages from Kafka. You can have as many services as you want in this fashion; if one service struggles to keep up, simply spin up more instances: going from 1 instance to 2 instances roughly doubles your processing capacity (as an example). Each of these backend instances keeps track of the same registry the frontend has, except that instead of tracking which socket is connected to which frontend instance via GUID, the backend tracks which frontend instance handles which GUID.



Upon receiving a message via the socket, Server2 will publish a message via Kafka, where any number of backend instances can pick it up and process it. Included with that message is the GUID, so if a response needs to come back, the backend simply sends back a message marked with that GUID, and the correct frontend server picks it up and sends a response over the socket to the browser client.
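As a rough sketch of that round trip, continuing the kafkajs assumption above (the topic names `feature-requests` and `client-responses` and the message shape are invented for illustration):

```typescript
// Sketch of a backend feature worker: any instance in the consumer group can
// pick up a request, process it, and publish the result keyed by the client GUID.
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "feature1-worker", brokers: ["kafka:9092"] });
const consumer = kafka.consumer({ groupId: "feature1-workers" }); // shared group spreads the load
const producer = kafka.producer();

async function handleFeature(payload: unknown) {
  // Placeholder for the actual business logic of this feature.
  return { echoed: payload };
}

async function main() {
  await producer.connect();
  await consumer.connect();
  await consumer.subscribe({ topics: ["feature-requests"] });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const { guid, payload } = JSON.parse(message.value!.toString());
      const result = await handleFeature(payload);
      // The GUID travels with the response so the owning frontend instance
      // can pick it up and push it down the right socket.
      await producer.send({
        topic: "client-responses",
        messages: [{ key: guid, value: JSON.stringify({ guid, result }) }],
      });
    },
  });
}

main().catch(console.error);
```

On the frontend side, each instance would consume `client-responses`, check whether the GUID is in its local socket map, and if so write the result back to that WebSocket.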



If any of the 60 frontend instances goes offline, the websockets should reconnect to the remaining instances, and the backend should be notified that those 5k GUIDs have moved to other servers. In the event that a message reaches the wrong server, the frontend instance should send that message back to the backend with re-routing instructions.



Kafka is just one of many possible solutions; you can use RabbitMQ or any other queuing system, or build one yourself. The message queue should be highly available, should autoscale as needed, and should at no point lose messages.



So, in short: many frontend instances behind a load balancer, using a message queue to sync among themselves and to talk to backend instances, which have access to databases and integrations.






answered Aug 3 '18 at 14:36 by Jan Vladimir Mostert













































I've just been thinking about things in the same realm. Actually, I think your thoughts are quite sophisticated, and currently I don't see a real problem.



So, for synchronisation, or for clarification in case I misinterpreted, my approach would look like this:





Client - WebSocket -> FDS load balancer -> Feature Dispatcher Service (FDS)

FDS - WebSocket -> FS1 load balancer -> Feature Service 1 (FS1)

FDS - WebSocket -> FS2 load balancer -> Feature Service 2 (FS2)





-> The client speaks to a front-facing load balancer, so you can spin up many FDS instances.

-> Each client then has one persistent connection to exactly one FDS.

-> For each client, the corresponding FDS holds one persistent connection to every downstream FS.

-> Those final FSs are also reached through a front-facing load balancer, so you can spin up many of those as well.
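Here is a minimal sketch of what that FDS routing could look like, assuming Node.js with the `ws` library (the `feature` field, port, and load balancer URLs are illustrative assumptions):

```typescript
// Feature Dispatcher Service: one socket per client on the left, one
// persistent socket per feature service (via its load balancer) on the right.
import { WebSocketServer, WebSocket } from "ws";

const featureSockets: Record<string, WebSocket> = {
  feature1: new WebSocket("ws://fs1-loadbalancer/ws"),
  feature2: new WebSocket("ws://fs2-loadbalancer/ws"),
};

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (client) => {
  client.on("message", (raw) => {
    const event = JSON.parse(raw.toString());
    // Dispatch on a "feature" discriminator; unknown features are rejected.
    const upstream = featureSockets[event.feature];
    if (upstream && upstream.readyState === WebSocket.OPEN) {
      upstream.send(JSON.stringify(event));
    } else {
      client.send(JSON.stringify({ error: "unknown or unavailable feature" }));
    }
  });
});
```

Routing responses back to the right client socket still needs some correlation ID attached to each event, which is essentially the GUID idea described in the other answer.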





Currently I think this is a good solution: it is scalable and keeps every part quite simple.



The only thing that is not so simple is how the load balancer makes its balancing decisions for the FSs (number of connections vs. how busy each FS is).






answered Nov 14 '18 at 6:28 by Lenny (edited Nov 14 '18 at 7:08)
























