My bridge program consists of a web page that interacts with a program that runs as a node.js application. All of the code is in a zip archive available on my Google Drive.
The node.js application can run on the same computer as the web page or on a different computer. The version I use for development and testing runs on my home computer. The version that others can use on the Internet runs on a virtual Linux instance at a Linode data center; I rent that instance by the month (it’s cheap—don’t worry ‘bout me).
I realized the other day that this setup has been running for about a year, and that I did not document how I installed and configured it. This post makes up for that lack.
My goal when I started development was to use bare-bones HTML, CSS, and Javascript. I wanted to avoid sinking into the quicksand of frameworks and development platforms, because I know from experience that those things can become fascinating areas of study in themselves, and I really just wanted a working bridge deal generator, not to be a web ninja.
I kept it bare-bones when the Javascript was all running inside my HTML page. But eventually I did want the thing to run on a web server. I could not find a framework-free way to do that, but I found some marvelous tools with very low barriers to entry.
So, to run my bridge program, you need to install node.js, express, cors, and better-sqlite3 on your server. For everything I mention here, there are working installation guides for Mac, Linux, and Windows. I have the tech running on Mac (home computer) and on Linux (Ubuntu, to be specific), and I had no problems installing on either platform.
For node.js and express, see this page for an introduction.
I did not read every lesson they link to because that way lies web ninja-hood. I read the introduction, then I followed the steps in the node and express installation guide. After I verified that I had a working node express installation, I then did the programming work necessary to split my bridge logic out from the Javascript embedded in the HTML file. That was a one-time effort that I will not describe here.
It may be that if I were to reinstall my program with the documented dependencies, the magical npm installation program would go get and install those dependencies. But just in case, here are links to the sites where my dependencies live.
Cors, which lets you actually call your program without being prevented by a very persnickety security enforcer, lives here.
Better-sqlite3, which provides a programmatic interface to sqlite, lives here. My program doesn’t actually use sqlite3 (yet), but while developing it I wrote some database tools to analyze results. I will surely do more of that in the future, and the main application may eventually need a database, so let’s just always include it.
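To give a flavor of what those database tools look like, here is a minimal better-sqlite3 sketch. It assumes the package is installed as described below; the table and data are made up for illustration, not taken from my actual analysis tools:

```javascript
// Hedged sketch of the better-sqlite3 API, using an in-memory database
// and a made-up "deals" table.
const Database = require('better-sqlite3');
const db = new Database(':memory:');
db.exec('CREATE TABLE deals (board INTEGER, pbn TEXT)');
db.prepare('INSERT INTO deals (board, pbn) VALUES (?, ?)').run(1, '[Deal "N:..."]');
const row = db.prepare('SELECT COUNT(*) AS n FROM deals').get();
console.log(row.n); // number of rows inserted
```

The nice thing about better-sqlite3 is that all of these calls are synchronous, which suits quick one-off analysis scripts.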
Now I’m going to lay some code on you. This is probably not strictly necessary because all of these files are in the zip archive. But they are relevant to the setup process and to how the program is run, so there will be code.
My code should unpack into a folder called “bridgeOnTheWeb”. But if I were installing this from scratch, I would unpack the archive somewhere else, then create a fresh installation under the folder where I want the application to run, using these steps (these are all in the node/express guides linked earlier in this post, but with different application names):
mkdir bridgeOnTheWeb
cd bridgeOnTheWeb
npm init
npm install express
# I don't recall if I did this, but at some point I must have:
npm install cors
npm install better-sqlite3
Those steps should populate the folders needed by node and the dependent programs. Now you can selectively copy my application folders in from wherever you unpacked the code archive. That’s basically the “public” folder.
There’s a file called package.json in the base directory of a node application. Make sure it lists your dependencies. Here’s what mine looks like:
{
  "name": "bridgestaticpage",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "better-sqlite3": "^7.4.4",
    "cors": "^2.8.5",
    "express": "^4.17.1"
  }
}
There’s a file in the base directory that contains the server code itself. That code is basically an express web server with your application’s callable entry points. Mine is called index.js and it looks like this:
const express = require('express');
const app = express();
const port = 3001;
app.use(require('cors')());
app.use(express.static('public'));

app.get('/getDeal', (req, res) => {
  var bcs = require('./public/js/bridgeClasses');
  var bgm = require('./public/js/bridgeGameMechanics');
  var theBoardNumber = req.query.boardNumber;
  var bridgeTable = bgm.startGame(req.query.gameLoadType,
    req.query.dealerPosition,
    req.query.vulnerabilityType,
    req.query.customUseLengthPoints,
    req.query.customUseDummyPoints,
    req.query.customOpeningType,
    req.query.partnerPointsSpec,
    req.query.lhoPointsSpec,
    req.query.rhoPointsSpec,
    req.query.folderName,
    theBoardNumber,
    req.query.manualDealSpec,
    req.query.shuffleLimit);
  var theStatusCode = bridgeTable.getStatusCode();
  var theStatusMessage = bridgeTable.getStatusMessage();
  var theDealString = bridgeTable.getDealInUnPortableBridgeNotation();
  var theLogString = bridgeTable.getGameLog();
  var thePBNFileString = bridgeTable.getPBNString(theBoardNumber);
  var theLINFileString = bridgeTable.getLINString(theBoardNumber);
  res.send(theStatusCode + "ABCXYZ" +
    theStatusMessage + "ABCXYZ" +
    theDealString + "ABCXYZ" +
    theLogString + "ABCXYZ" +
    thePBNFileString + "ABCXYZ" +
    theLINFileString);
});

app.get('/', (req, res) => {
  res.send('Hello World');
});

app.listen(port, () => console.log(`bridge program is listening on port ${port}.`));
I chose to use port 3001. You can use whatever port is available on your server.
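If you would rather not edit the file to change ports, one hypothetical tweak (not something my index.js actually does) is to read the port from an environment variable:

```javascript
// Hypothetical: BRIDGE_PORT is a made-up variable name; the application
// does not actually read it. Falls back to 3001 when unset.
const port = Number(process.env.BRIDGE_PORT) || 3001;
console.log(`would listen on port ${port}`);
```
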
The client code uses a program called FileSaver.js to, you know, save files generated by the application. That program should be in the public/js folder of the bridge application, and you can also get a newer version of it at this link. This is not server code so you don’t install it using npm; instead, just copy the file into the folder and your client-side code can call it.
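Client-side use is essentially a one-liner. A hedged sketch (this is browser code; saveAs is the global function FileSaver.js defines, and the file name and contents here are placeholders, not names from my code bundle):

```javascript
// Browser-side sketch: FileSaver.js exposes a global saveAs() function.
const pbnText = "..."; // e.g. the PBN field from a /getDeal response
const blob = new Blob([pbnText], { type: "text/plain;charset=utf-8" });
saveAs(blob, "deal.pbn"); // prompts the browser to download the file
```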
Most of the Javascript code in my application is free from any obvious awareness of its environment and the web server. The main exception to this is a file in public/js called bridgePageHandling.js. I tried to keep all code related to interacting with the HTML page in this one program file. It includes a line at the top that specifies the URL and port where the web server (our node express program) is listening. I will eventually move this off to a configuration file, but for now you need to be aware of it and make sure it points to a valid address and port. You can see here that on my local server it points to localhost, and the commented-out line is the line I activate when running this code on my remote Linode server.
//const serverURL = 'http://45.56.115.218:3001';
const serverURL = 'http://localhost:3001';
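That serverURL is what the client-side calls hit. As you can see in index.js, the server replies with one string whose fields are joined by “ABCXYZ”, so the client has to split it back apart. A sketch of that — parseDealResponse is my name for an illustrative helper, not a function in the code bundle:

```javascript
// Split the server's single-string reply into its six fields.
// The "ABCXYZ" separator and the field order match what index.js sends.
function parseDealResponse(text) {
  const [statusCode, statusMessage, deal, log, pbn, lin] = text.split("ABCXYZ");
  return { statusCode, statusMessage, deal, log, pbn, lin };
}

// Illustrative call (serverURL comes from the top of bridgePageHandling.js):
// fetch(`${serverURL}/getDeal?boardNumber=1&...`)
//   .then(r => r.text())
//   .then(text => { const fields = parseDealResponse(text); /* ... */ });
```
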
Once you have it all set up, you’re ready to run! In the bridgeOnTheWeb folder, run this on the command line:
node index.js &
I include the “&” so that I get control of the command line back. If you don’t include it, you lose the command line for as long as your server program runs. That is basic command-line geekery, but I’m mainly reminding myself to use the “&”.
Within a second or two you should see this message on the command line:
iMac:bridgeOnTheWeb masterponomo$ node index.js &
[1] 30317
iMac:bridgeOnTheWeb masterponomo$ bridge program is listening on port 3001.
Very nice. It even tells you what port it is listening on. You can now point your browser to the URL and port named in your bridgePageHandling.js file. For instance, when using my local program I put this in the address bar of my browser:
localhost:3001
What’s that 30317 that shows up right after you run the server, you ask? That is the process number assigned to the server by the operating system. It is useful in case you need to kill the server from the command line. You don’t have to remember it, though, because you can find it with the ps command like this:
iMac:bridgeOnTheWeb masterponomo$ ps -ax | grep index
3026 ?? 0:18.24 /System/Library/PrivateFrameworks/AssistantServices.framework/Versions/A/XPCServices/media-indexer.xpc/Contents/MacOS/media-indexer
30317 ttys000 0:00.13 node index.js
30322 ttys000 0:00.00 grep index
iMac:bridgeOnTheWeb masterponomo$
And then if you need to kill it you just do this:
kill -9 30317
A reminder: if you change any of the server-side Javascript programs, you need to kill and restart the server program and then hit the refresh button on your browser to reload the web page.
And that’s it!
Update on February 7, 2023: Whoa! That wasn’t it! After more than a year of uptime, my Linode server was rebooted by a program called Lassie that revives crashed servers. I have no idea whether my server actually crashed or was shut down for some reason by Linode, but in any case it went down and was rebooted. I had not set up anything to automatically run my bridge application server, so my website was dead for a couple of days until I went in and manually started up the bridge program.
I have now added a cron job to revive the bridge program if necessary. If you decide to run my program on your own server, you may want to do this too. The revive script and the cron job to run it are in the development folder, but I will step you through them here. They will need modifications to work on anything other than my Linode because of course your path names will be different.
The script to run the bridge server program if it is not already running is called isBridgeRunning.sh. All the variable-setting is something I did because I had an awful time getting cron to run the thing and I started monkeying around with permissions and variables both in this script and in the crontab. Please note that my path names will not all exist on your system, so you must customize this. On the Linode server I got the PATH by doing:
echo $PATH >> isBridgeRunning.sh
to get it into the bottom of the script, then I used that text to make the PATH variable setting at the top. I cut and pasted the NVM_DIR setting from my .bashrc file. I also did a bunch of magical mystery stuff in the crontab, which I ultimately was able to remove, so I have no idea whether these variables in my script are necessary.

The crontab magic included putting my user ID as the sixth field on each crontab line, to tell cron to run my commands as me, not root. It matters, I think, because my node installation is under my user ID. Then again, for all I know the cron job already runs as me by virtue of running from my user ID’s crontab; you can see I no longer put the user ID in the crontab file. But I did that user ID thing early on because at first nothing seemed to run and I could find no cron log as evidence of what was going on. Hence all the monkeying.

At one point I had environment variables in the crontab AND in the bridge script, plus the user ID in the crontab. I also tried all kinds of variations on partially- and fully-specified path names. As you can see, I wound up with a fairly simple crontab, but I decided to keep the variable exports in the bridge script just because I was so relieved it finally decided to work.
I should have mentioned up front that along with not being (or wanting to be) a web ninja, I am also barely functional in *nix and its many delightful programs. Since I do not have the bandwidth to master everything, I have adventures when I try to do something that is probably fairly easy for the ninja folk.
If you are a cron jockey, please do that simple thing you know how to do with your own system and ignore my machinations. The main action starts with the “ps” command:
export PATH=/home/masterponomo/.nvm/versions/node/v16.11.1/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
ps -ax | { grep -v Killed || true; } | { grep -v color || true; } | { grep -v grep || true; } | grep index.js > /dev/null
if [ $? -ne 0 ]; then
  $(which node) /home/masterponomo/bridgeOnTheWeb/index.js &
fi
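As an aside, pgrep would collapse that chain of grep -v filters into a single test. This is a sketch of a different technique than my script uses, assuming pgrep is installed (you would still want the PATH/NVM exports above so cron can find node):

```shell
#!/bin/sh
# pgrep -f matches against the full command line and excludes itself,
# so no grep -v gymnastics are needed. Path is from my Linode setup.
if ! pgrep -f "node .*index\.js" > /dev/null; then
  node /home/masterponomo/bridgeOnTheWeb/index.js &
fi
```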
The cron job is installed via “crontab -e” and then editing the file that is opened unto you. The commented-out line that runs the date program is there just so I can uncomment it as needed to make sure cron is working for a very simple case if I encounter problems with the line to restart my bridge program. Both lines emit little text files in my development directory if the cron jobs run correctly. If those text files don’t show up, or if they show up but have 0 bytes of data, then something is wrong.
# Edit this file to introduce tasks to be run by cron.
#
# Each task to run has to be defined through a single line
# indicating with different fields when the task will be run
# and what command to run for the task
#
# To define the time you can provide concrete values for
# minute (m), hour (h), day of month (dom), month (mon),
# and day of week (dow) or use '*' in these fields (for 'any').
#
# Notice that tasks will be started based on the cron's system
# daemon's notion of time and timezones.
#
# Output of the crontab jobs (including errors) is sent through
# email to the user the crontab file belongs to (unless redirected).
#
# For example, you can run a backup of all your user accounts
# at 5 a.m every week with:
# 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
#
# For more information see the manual pages of crontab(5) and cron(8)
#
# m h dom mon dow command
#*/1 * * * * date > /home/masterponomo/bridgeOnTheWeb/cjjnode.txt
*/5 * * * * cd /home/masterponomo/bridgeOnTheWeb && /home/masterponomo/bridgeOnTheWeb/isBridgeRunning.sh > /home/masterponomo/bridgeOnTheWeb/cjj.txt
Update on November 21, 2023. Gosh, in hindsight I realize I kind of just ended things abruptly—an ominous hint that you might do something wrong, followed by a whomping big block of code. I’ll try to do better this time because—you guessed it—there’s more!
Things were wonderful for a while there. My bridge deal generator runs great as a Node application, both locally and on a remote server. But when I tried to show it off at my local bridge club, which is located in a college campus building, I ran into campus network policy. It would not let us connect to my remote site using a raw IP address and port number. Boo hiss!
But also yay rah, for this inspired me to take action on an item from my to-do list: making my program accessible through a domain name. I’m going to list the steps I took to accomplish that, but I’m not going to go into deep how-to detail because everything I did is copiously documented by others, typically on the websites of the companies or products I mention here.
First, I registered a domain name (actually, I did that two years ago, but out of laziness I never actually set up a site for it). I registered “bridgeoutahead.com” through Network Solutions, Inc.
NSI offers website hosting, but I wanted the domain to run on my Linode (aka Akamai) virtual server. Here’s where I get all hand-wavy: In their help documentation, NSI tells you how to change the DNS nameservers for your domain, and in their documentation, Linode tells you what nameserver names to use at your registrar. So I changed the nameserver setup at NSI to point to the Linode nameservers.
Linode also tells you how to set up domains (that you own) on your server. All you really need to do is tell the Linode nameservers where to route traffic for your domain name. You do that by using their handy “Create Domain” tool which lets you associate a domain (that you own, and for which traffic will arrive via the nameservers) with a destination, which in my case was a private, unchanging IP address that was assigned to my Linode instance when I created it.
If you happened to do some of this setup in reverse order and you saved this nameserver stuff for last, be advised: these DNS changes happen quickly at your registrar and at your virtual server host, but they can take a while—hours or days sometimes, to propagate out into the wider world where domain name lookups actually happen. So don’t expect to be able to route traffic to your server via the domain name immediately after making these changes. Set a spell, take your shoes off, relax and let it come to you in its own good time.
So what now? Does that mean users can key in your domain name followed by a port number, and run the bridge deal program? Yes, yes it does mean that. However, it’s not really ideal because while you are not using a raw IP address, you’re still using a raw port number, and while I can’t swear to this, I think some institutions’ internet access policy may frown on that. What we want is to go in with nothing but user-friendly, non-threatening domain name-type words.
To do that, we need another freaking layer of software: a web server that can magically accept friendly words and turn them into an evil(-looking) direct call to a program running on a port. Thinking quickly, I consulted one or two help files online and decided to try NGINX, a purportedly easy-to-use web server.
Two days and many false starts later, I had a working nginx installation. The basic installation as documented on the NGINX website and on the Linode website is quite easy—a single command will install the server and it will start up automatically. But installing the server does not set up your web domains—you have to do that by configuring the web server.
The key thing you need to set up is a server definition for each domain in a file called bridgeoutahead.com that lives in the /etc/nginx/sites-available folder, with a symlink to make it also visible in /etc/nginx/sites-enabled. I put the server definition file into a folder called nginx_goodies/sites-available in the code bundle for my program for reference, but of course it must be installed in the right place on the computer where the program will actually be running.
There’s a program file called bridgePageHandling.js in the code bundle that includes a hard-coded server address at the top of the file. This address is set to “localhost:3001” for running on your local machine, but for running on a remote server it is set to an address that is meaningful on that server—typically, a user-friendly domain name. In my case, for my remote server, it is set to “http://bridgeoutahead.com/bridgedeal/”.
My nginx server definition file points traffic for the main domain name to /var/www/bridgeoutahead.com, where it loads main.html. But for traffic for the bridge deal program, accessed via http://bridgeoutahead.com/bridgedeal, I pointed it to the folder where I have always run my node application, which is /home/masterponomo/bridgeOnTheWeb. You can see this in the server definition file, but I mention it here to note that by specifying something other than the /var/www subfolder, you have to deal with read and execute permissions on files in the application subfolders. This is because nginx runs as the www-data user by default, and that user only has automatic access to the /var/www folders.
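The actual server definition file is the one shipped in nginx_goodies/sites-available; for orientation only, a hypothetical sketch along those lines might look like this (the paths and the proxy details are my assumptions, not a copy of the real file):

```nginx
# Hypothetical sketch of a server definition for the domain.
server {
    listen 80;
    server_name bridgeoutahead.com;

    # Main site: static files under /var/www, served as main.html
    root /var/www/bridgeoutahead.com;
    index main.html;

    # Bridge deal program: hand these requests to the node server on port 3001
    location /bridgedeal/ {
        proxy_pass http://localhost:3001/;
    }
}
```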
Sysadmin wisdom tells you to only grant the minimum necessary permissions. But I am not wise, and when I encountered “403” and “404” error codes with my setup, I eventually just changed all permissions on my home folder and all its subfolders, including bridgeOnTheWeb, to “777”. I am not securing any money or access to top-secret stuff, so I don’t really care if this makes my server moderately more hackable by evil-doers. I doubt they will ever find the Bitcoin keys I have cleverly hidden in a deep, dark subfolder. Just kidding.
The end result is you can now get to my program through a nifty domain name. Click it and see—if I’m still around to keep things running, it should work.
Sadly, there’s more, but I haven’t done it yet so I’ll have to describe it later. The “more” is that my domain name, as cool as it is, may still violate corporate or campus policy because it uses http instead of the more secure https. But to use https, I will have to obtain and install security certifications, so there’s another multi-day computer-wrangling session, and another update to this post, in my near future.
Rather than end abruptly due to exhaustion, I will take a moment to bid you a fond adieu as I depart.
Adieu.
Update on November 22, 2023. Gosh-a-roonies, I’m back already! It turns out that implementing HTTPS on all of my domains was easy as can be, with no hidden gotchas. I only mention the programs/services involved as reminders to myself in case the handy Linode link goes stale or the how-to goes away.
The steps to implement HTTPS (that is, TLS, transport layer security) in NGINX are:
Configure firewall rules with UFW. Allow ssh, http, and https.
Install snapd.
Install certbot.
Run certbot to request and install TLS/SSL certificates from Let’s Encrypt. Answer the prompts carefully to make sure all of your domains receive a certificate, and that you provide the correct email address so you will be notified about pending renewals and other useful information. These free certificates expire after 90 days and must be renewed, but by using certbot to request and install them, you let certbot handle your renewals automatically.
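For reference, the commands behind those steps look roughly like this on Ubuntu. This is a hedged sketch: the exact package names and flags come from the Linode and certbot guides, and certbot’s interactive prompts do the real work:

```shell
# Firewall: allow ssh, http, and https through UFW
sudo ufw allow ssh
sudo ufw allow http
sudo ufw allow https
sudo ufw enable

# snapd is preinstalled on recent Ubuntu; install certbot via snap
sudo snap install --classic certbot

# Request and install certificates for your nginx domains;
# certbot edits the nginx config and sets up automatic renewal
sudo certbot --nginx
```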
I would like to say that there are no more technical hills to climb for installing this thing, but you never know.
Adios.