How to automatically download files from GitHub without copying the repository




















I have a number of scripts that I use almost every day in my work. I develop and maintain them on my personal laptop: a local Git repository tracks the changes, and I push those changes to a repository on GitHub.



I do a lot of my work on a remote supercomputer, where I use these scripts heavily. I would like to keep my remote /home/bin updated with the maintained scripts, but without cluttering the system with the repository itself.



My current solution does not feel ideal. I have added the code below to my .bashrc: whenever I log in, the old repository is deleted, the project is cloned afresh from GitHub, and the script files I want are copied into my bin and made executable.



This sort of works, but it does not feel like an elegant solution. I would rather download the script files directly, without involving the Git repository at all. I never edit the scripts from the remote computer anyway, so I just want to get the files from GitHub.



I was thinking that wget might work, but hard-coding the URLs of the raw file pages on GitHub does not feel very robust: if I rename a file, I suppose I have to update those URLs as well. At least my current solution is robust in that respect (as long as the GitHub link itself does not change).
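For the record, the wget variant I had in mind would look roughly like this; user/myproject and the script names are placeholders, and the hard-coded file list is exactly the part that would have to be kept in sync with renames:

```shell
# Sketch only: fetch named scripts from GitHub's raw endpoint, one request each.
# user/myproject and the file names below are placeholders.
RAW_BASE="https://raw.githubusercontent.com/user/myproject/master"
fetch_raw() {
    for f in "$@"; do
        wget -q -O "$HOME/bin/$f" "$RAW_BASE/$f" && chmod +x "$HOME/bin/$f"
    done
}
# usage: fetch_raw myscript.py helper.sh
```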



Code in my .bashrc:



REPDIR="$HOME/myproject"
if [ -d "$REPDIR" ]; then
    rm -rf "$REPDIR"
    echo "Old repository removed."
fi
cd "$HOME"
git clone https://github.com/user/myproject
cp "$REPDIR"/*.py "$REPDIR"/*.sh /home/user/bin/
chmod +x /home/user/bin/*


Based on Kent's solution, I have defined a function that updates my scripts. To avoid any issues with stale symlinks, I simply unlink everything and relink; that might just be my paranoia, though.



function updatescripts() {
    DIR=/home/user/scripts
    CURR_DIR=$PWD
    cd "$DIR" || return
    git pull origin master
    cd "$CURR_DIR"
    for file in "$DIR"/*.py "$DIR"/*.sh; do
        if [ -L "$HOME/bin/$(basename "$file")" ]; then
            unlink "$HOME/bin/$(basename "$file")"
        fi
        ln -s "$file" "$HOME/bin/$(basename "$file")"
    done
}
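As an aside, the unlink-then-relink dance can be collapsed: ln -sf replaces an existing symlink in one step. A minimal sketch, using the same placeholder paths:

```shell
# Sketch: same effect as the unlink/relink loop above, via ln -sf,
# which replaces an existing destination symlink in one step.
linkscripts() {
    src=$1 dest=$2
    for file in "$src"/*.py "$src"/*.sh; do
        [ -e "$file" ] || continue      # skip patterns that matched nothing
        ln -sf "$file" "$dest/$(basename "$file")"
    done
}
# usage: linkscripts /home/user/scripts "$HOME/bin"
```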









bash git github

asked Nov 16 '18 at 12:42 by Yoda, edited Nov 16 '18 at 15:32
























1 Answer
































On that remote machine, don't rm and re-clone; keep the repository somewhere and just pull. Since you said you will not change the files on that machine, there will be no conflicts.



For the script files, don't cp them; instead, create symbolic links (ln -s) in your target directory.
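If the goal really is to have no repository on disk at all, one repository-free variant (a sketch, assuming GitHub's snapshot-tarball endpoint and GNU tar's --wildcards option; user/myproject is a placeholder) is to unpack only the scripts from a tarball:

```shell
# Sketch: extract just the scripts from a GitHub snapshot tarball -- no .git on disk.
# --strip-components=1 drops the top-level "myproject-master/" directory.
fetch_scripts() {
    url=$1 dest=$2
    curl -sL "$url" | tar -xzf - -C "$dest" --strip-components=1 --wildcards '*/*.py' '*/*.sh'
}
# usage: fetch_scripts https://github.com/user/myproject/archive/master.tar.gz "$HOME/bin"
```

Note that renaming a file inside the repository still works with this approach, since the patterns match extensions rather than fixed names.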






• This may be a cleaner alternative to my rm/clone solution, but it still does not avoid keeping the whole repository. I agree it is better than my previous solution, though. – Yoda, Nov 16 '18 at 15:33











• @Yoda can you say a bit about why you want to avoid the whole repo? Sensitive data, or is it a big repository? If it contains sensitive data, then having the clone command in your .bashrc was not safe anyway. If it is big, a pull is much faster than a clone. – Kent, Nov 16 '18 at 20:28













• One of the first things I did after adopting the new solution was to edit one of the files... which messed up the repository a bit. If I bypassed the repository entirely, it would be easier to recover from edits like that: I would just re-download and overwrite the files. I don't have sensitive data or anything like that; it was just my lack of self-discipline about not editing the remote files. – Yoda, Nov 16 '18 at 22:03












answered Nov 16 '18 at 12:56 by Kent












