Is there a heuristic algorithm for groupBy + count?












I have a list of integers and I want to count the number of times each integer appears in the list.



For example: [0,5,0,1,3,3,1,1,1] gives (0 -> 2), (1 -> 4), (3 -> 2), (5 -> 1). I only need the counts, not the values (the goal is to build a histogram of the counts).



A common approach is to group by value and then count the cardinality of each group. In SQL: SELECT count(*) FROM myTable GROUP BY theColumnContainingIntegers.
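For reference, this is what the exact computation looks like in code. A minimal sketch in Ruby (chosen to match the Ruby-style snippets in the answer below; the variable names are illustrative and tally needs Ruby 2.7+):

    list   = [0, 5, 0, 1, 3, 3, 1, 1, 1]
    counts = list.tally
    # => {0=>2, 5=>1, 1=>4, 3=>2}

    # Equivalent without tally (older Rubies):
    counts = list.group_by { |x| x }.transform_values(&:size)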



Is there a faster way to do this? A heuristic or probabilistic approach is fine, since I am processing a large data set and sacrificing precision for speed is acceptable.



Something similar to the HyperLogLog algorithm (used to estimate the number of distinct elements in a data set) would be great, but I have not found anything like it...










Tags: algorithm, group-by, language-agnostic






asked Nov 14 '18 at 16:46 by BMerliot, edited Nov 21 '18 at 15:17
























          1 Answer

          Let's take your set of 9 elements [0,5,0,1,3,3,1,1,1] and make it bigger while keeping the same element frequencies:



          > bigarray = [0,5,0,1,3,3,1,1,1] * 200
          => [0, 5, 0, 1, 3, 3, 1, 1, 1, 0, 5, 0, 1, 3, 3, 1, ...


          Now bigarray has 1800 elements, so let's work with it.



          Take a random sample of 180 elements from this set.



          Now compute the occurrence counts for this random subset:



          {5=>19, 3=>45, 1=>76, 0=>40}


          Normalized:



          {5=>1.0, 3=>2.3684210526315788, 1=>4.0, 0=>2.1052631578947367}


          Of course, the results will be different for a different random subset:



          {5=>21, 3=>38, 1=>86, 0=>35}


          Normalized:



          {5=>1.0, 3=>1.8095238095238095, 1=>4.095238095238095, 0=>1.6666666666666667}


          Of course there is some error here; this is inevitable, and you will need to decide how much error is acceptable.
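          A sketch of this experiment in Ruby (for illustration; it assumes Ruby 2.7+ for tally, and it normalizes by the smallest observed count, which is what the numbers above do):

              bigarray = [0, 5, 0, 1, 3, 3, 1, 1, 1] * 200   # 1800 elements, same frequencies
              sample   = bigarray.sample(180)                # random 10% subset
              counts   = sample.tally                        # e.g. {5=>19, 3=>45, 1=>76, 0=>40}

              min = counts.values.min.to_f
              normalized = counts.transform_values { |c| c / min }
              # e.g. {5=>1.0, 3=>2.37, 1=>4.0, 0=>2.11}

              # To estimate absolute counts in the full array instead, scale by
              # the inverse sampling fraction:
              scale     = bigarray.size.fdiv(sample.size)
              estimated = counts.transform_values { |c| (c * scale).round }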



          Now run the same test on a bigarray of size 1000 with 50% 0's and 50% 1's:



           > bigarray = [0,1] * 500
          => [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, ...


          With a sample of 100 elements:



          {0=>50, 1=>50}


          Normalized:



          {0=>1.0, 1=>1.0}


          Second sample:



          {0=>49, 1=>51}


          Normalized:



          {0=>1.0, 1=>1.0408163265306123}


          It seems that we can safely work with a reduced subset, and this is where sampling comes in.



          Especially reservoir sampling; this may be very useful if your data is populated 'live' (as a stream) or the set is too large to process all values at once.
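          A minimal sketch of reservoir sampling (Algorithm R) in Ruby, for illustration; reservoir_sample is a hypothetical helper, not a built-in, and it assumes the input is an Enumerable (an Array or a lazy stream):

              def reservoir_sample(stream, k)
                reservoir = []
                stream.each_with_index do |item, i|
                  if i < k
                    reservoir << item                # fill the reservoir first
                  else
                    j = rand(i + 1)                  # uniform index in 0..i
                    reservoir[j] = item if j < k     # keep item with probability k/(i+1)
                  end
                end
                reservoir
              end

              sample = reservoir_sample(bigarray, 180)
              sample.tally                           # approximate relative frequencies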



          Edit:



          Concerning the comments: of course, if you have a large set and some element appears in it very rarely, you may miss it entirely and its observed occurrence count will be 0.



          Then you can use a smoothing function (see additive smoothing): just pretend that each possible element appeared a small, fixed amount more often than it actually did.



          For example, let's say we have a set containing:



          1000 occurrences of element 1
          100 occurrences of element 2
          10 occurrences of element 3
          1 occurrence of element 4


          Let's say our sample contains {1=>100, 2=>10, 3=>1, 4=>0}.



          With a smoothing parameter of 0.05, we add 0.05 to each occurrence count:



          {1=>100.05, 2=>10.05, 3=>1.05, 4=>0.05}



          Of course, this assumes that you know which values can possibly appear in the set.
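          A short sketch of that smoothing step in Ruby (for illustration; possible_values and alpha are assumptions you have to supply, and Array#to_h with a block needs Ruby 2.6+):

              possible_values = [1, 2, 3, 4]                    # every value that can occur
              alpha           = 0.05                            # smoothing parameter
              observed        = { 1 => 100, 2 => 10, 3 => 1 }   # 4 was never sampled

              smoothed = possible_values.to_h { |v| [v, observed.fetch(v, 0) + alpha] }
              # => {1=>100.05, 2=>10.05, 3=>1.05, 4=>0.05}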






          answered Dec 2 '18 at 9:48 by dfens, edited Dec 2 '18 at 12:14

          • I already tried sampling, but this only works when every distinct element appears a sufficient number of times in the array. If you sample an array of n elements by only computing on n*p elements (where p is a proportion between 0 and 1), you get a probability of approximately (1-p)**k of not detecting an element that appears k times (if n >> k); see the illustrative calculation after these comments.

            – BMerliot
            Dec 2 '18 at 11:38











          • If you have an array of at least millions of elements (which is the case if you need sampling), you won't detect some of these elements. This can be problematic, since you then lose a non-negligible part of your elements' diversity.

            – BMerliot
            Dec 2 '18 at 11:40











          • I tried coupling sampling (to estimate recurrent elements) with HyperLogLog (to estimate the distinct-element cardinality, which gave me a rough estimate of how many distinct elements I did not detect). The problem is that an undetected element can appear 1 time, or 100 times, or even more if you are unlucky.

            – BMerliot
            Dec 2 '18 at 11:43











          • Thus sampling is only accurate enough when the sampling rate is close to 100%, which makes it kind of useless.

            – BMerliot
            Dec 2 '18 at 11:45






          • Updated my answer; consider using smoothing.

            – dfens
            Dec 2 '18 at 12:15
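          To make the miss probability from the first comment concrete, here is a small illustrative calculation in Ruby (the sampling fraction and the counts k are arbitrary example values):

              p_sample = 0.1                          # sample 10% of the array
              [1, 5, 10, 20, 50].each do |k|
                p_miss = (1 - p_sample)**k            # ~ P(an element with k copies is never sampled)
                puts "k=#{k}: P(miss) ~ #{p_miss.round(3)}"
              end
              # => k=1: 0.9, k=5: 0.59, k=10: 0.349, k=20: 0.122, k=50: 0.005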










