We’re gearing up to release our first program for the general public in quite some time, so we’re going to need a real website again, rather than the one-page placeholder that I put up several years ago.
The last time I had to design a website was before the turn of the century. I started with FrontPage (which we had as part of a copy of Microsoft Office from the era). It had a wonderful feature where it collaborated with some FrontPage extension on the server to update your site after any changes. I didn’t know enough to appreciate that at the time, and apparently at least some web hosts really don’t like it, because when we changed hosting companies a few years later the new one flatly refused to support it.
At that point we bought a copy of DreamWeaver. That was supposed to be the very best website editor, and it was quite expensive, but when I started using it I was REALLY disappointed. It seemed more primitive than even that aging copy of FrontPage, and required you to use the ancient and insecure FTP protocol to upload your work to your site.
Fast forward to the present.
Since GoddessJ and I are collaborating on the HTML/CSS stuff, I set up a Git repository on our internal office server (not the same machine as our web server) so we could keep our copies of the files synchronized, and the files would be automatically backed up with the rest of our source code repositories. As I finished that, it suddenly struck me that it should be possible to use Git to sync the files to the server too, right? That would have some major advantages over doing it manually or via something like DreamWeaver.
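In case it helps anyone doing the same, the setup on our end looks roughly like this (the server name and repository path here are stand-ins, not our real ones):

# On the office server (reachable over SSH as officeserver): create a bare
# repository to act as the shared, automatically-backed-up copy
ssh officeserver 'git init --bare /srv/git/website.git'

# On each of our machines: clone it and work as usual
git clone officeserver:/srv/git/website.git
cd website
git add -A
git commit -m "Tweak the stylesheet"
git push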
A web-search said that, if I had SSH access to the server and could run Git on it, then the answer was yes. 😀
The first problem was setting up SSH to access the web server properly. I’d already set it up for basic access, but it required manually entering a password each time; it needed to use public keys instead. (This is using Linux on the local system, where an always-available “keyring” program caches the SSH key passphrase, which is different from a login password. Windows machines don’t have anything similar, so far as I know.)
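(If your desktop doesn’t do that caching for you, the manual equivalent is roughly the following; the key file name is just an example.)

# Start an SSH agent for this shell session, then add the key to it,
# entering the passphrase once
eval "$(ssh-agent)"
ssh-add ~/.ssh/id_dsa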
I’d previously set up webserver as an alias to the web server in ~/.ssh/config, with the alternate port that our hosting company uses and some other configuration stuff it needed, so based on this page, all that was necessary was to copy the local public key file (already on my machine) to the server. There was no SSH directory on the server yet, so after creating one (via mkdir ~/.ssh from an existing SSH session), this command uploaded the key from the local machine…
scp ~/.ssh/id_dsa.pub webserver:~/.ssh/authorized_keys
…and then back to the existing SSH session to set the permissions…
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
We don’t have root access to the web server, so I didn’t try to chown the file to root as described. Despite that, it worked: now a simple ssh webserver takes me directly into the server. 🙂
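For completeness, the ~/.ssh/config entry mentioned earlier looks something like this (the host name, port number, and user name below are made-up examples rather than our real settings):

Host webserver
    HostName www.example.com
    Port 2222
    User ouraccount
    IdentityFile ~/.ssh/id_dsa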
According to another page the search had turned up, the currently-preferred way to use Git to update a website is described here. The directions there worked like a charm: after following them, I tested it by making a slight modification to the index.html page, committing it to the local repository, then doing a git push web… and IT WORKED! The copy of Git on the server apparently used the post-receive hook set up in those instructions to check out the changes as soon as I uploaded them, because when I refreshed the index page in my browser, I could see the change! GoddessJ and the cats were quite startled when I let out a loud cheer. What can I say, I love it when technology works as advertised. 😉
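I won’t reproduce that page’s directions here, but the heart of it, as I understand it, is a bare repository on the server with a post-receive hook that checks each push out into the document root. A rough sketch, with placeholder paths and the same web remote name as the push above:

# On the web server: a bare repository plus a post-receive hook
mkdir ~/website.git && cd ~/website.git
git init --bare
cat > hooks/post-receive <<'EOF'
#!/bin/sh
# After each push, check the newly pushed content out into the live web root
# (the public_html path is a placeholder and must already exist)
GIT_WORK_TREE=/home/ouraccount/public_html git checkout -f
EOF
chmod +x hooks/post-receive

# Back on the local machine: add the server as a remote and push
git remote add web webserver:website.git
git push web master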
This setup will be a LOT better than using FTP. scp (the SSH secure-copy command) does the same things as FTP, but securely; the drawback is that you still have to upload each changed file separately. With this, Git takes care of uploading everything (compression automatically included) and unpacking it to the proper directories on the server, and it will only update the pages that have actually changed. You can also easily back up and restore the entire repository via a single git clone command. With the main copy on our automatically-backed-up office server, it’s a pretty tough combination to beat.
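For example, recreating a working copy from the office server’s master copy is just a clone (placeholder names again), and the web server could be rebuilt from it the same way:

git clone officeserver:/srv/git/website.git
cd website
# Re-add the web remote and push if the server copy ever needs rebuilding
git remote add web webserver:website.git
git push web master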
If you’re setting up a web server, give it a try yourself. I’m pretty sure you’ll like it, especially if you’ve dealt with the other methods.
It’s a very old sysadmin trick to use CVS on your /etc files; this is a modern variation on the same principle.
That doesn’t, of course, negate the fact that this is pretty neat. 🙂
I wasn’t aware of that. It makes sense, though, and I should probably do something similar for the few /etc files that I regularly change. The neat part of the trick is that the post-receive hook applies the changes to the site instantly. Without that, it’s just another Git repository.
Yeah, most CVSing-of-/etc implementations I’ve seen don’t have that “instant commit,” though that could easily be scripted, of course.
It wouldn’t make sense for that.
Yeah, I could imagine… “let’s put rm -rf / in /etc/init.d/foo … hmm, let’s not. Oops! Instant commit!!!!1111” 😉