Continuing in the interest of providing multiple file formats for my web pages, I now have my home page available in XHTML5 format. URLs just need .xhtml tacked on the end, except for the home page, which needs index.xhtml. This makes use of Symfony’s _format URL parameter.
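Roughly, a route can expose that parameter like this (a hypothetical sketch; the controller, route, and rendering are my illustration, not the site’s actual code):

<?php
// Sketch of a Symfony route using the _format parameter; the trailing
// _format picks which variant of the page to serve.
use Sensio\Bundle\FrameworkExtraBundle\Configuration\Route;
use Symfony\Component\HttpFoundation\Response;

class PageController
{
    /**
     * @Route("/{slug}.{_format}", defaults={"_format"="html"},
     *     requirements={"_format"="html|xhtml"})
     */
    public function showAction($slug, $_format)
    {
        // Serve the XHTML5 variant with its proper media type.
        $contentType = ($_format === 'xhtml') ? 'application/xhtml+xml' : 'text/html';
        $body = ''; // ...render the page for $slug here...
        return new Response($body, 200, array('Content-Type' => $contentType));
    }
}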
Finding short TLDs
I’ve been looking for a short domain to potentially use for permashortlinks. For a domain to be usefully short, it must have both a short TLD and a short SLD. At three characters each, the domain would total seven characters (including the period). Much more than that and it starts to lose its usefulness. There are no one-character TLDs (though they’d be great for permashortlinks). Two-character TLDs are reserved for country codes. I’m a bit reluctant to use a code for a country I don’t live in, and the one I do live in disallows whois privacy. I’m reluctant to decide that my address, phone number, and email address will be “perma”nently available for all to see (assuming I keep the permanent promise of permashortlinks). So three-character TLDs are where I’ve been doing most of my looking.
There are a number of good lists of available TLDs. Indiewebcamp has a list of options with a brief blurb on each one’s fitness and possible problems, though it only covers country-code domains. United Domains has a list with current TLDs and their prices, plus soon-to-be-available TLDs; it has a page for each with some information about the TLD and marketing-speak thoughts on uses. Name.com has a list with per-TLD pages as well, though those are often more brief. It’s hard to parse these lists to find just the short ones though.
I found two plain-text lists of TLDs (IANA’s and publicsuffix’s), which got me thinking that I could parse them to find just the ones with three characters. I wrote a script in PHP to do that, then modified it to handle any number of characters.
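A minimal sketch of that approach (hypothetical, not the original script), assuming IANA’s plain-text format of one TLD per line with # comment lines:

<?php
// Sketch: filter a plain-text TLD list down to entries of a given length.
// Assumes IANA's list format: one TLD per line, '#' comment lines.
$length = isset($argv[1]) ? (int) $argv[1] : 3;
$list = file_get_contents('https://data.iana.org/TLD/tlds-alpha-by-domain.txt');
foreach (preg_split('/\R/', $list) as $line) {
    $tld = strtolower(trim($line));
    if ($tld === '' || $tld[0] === '#') {
        continue;
    }
    if (strlen($tld) === $length) {
        echo $tld, "\n";
    }
}

Run as, e.g., php tlds.php 3 to list the three-character options.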
Continue reading post "Finding short TLDs"
Check request compression savings
Gzip compression is almost universally recommended as a basic step to improving site performance. It basically uses a little bit of extra processing on the server and client to significantly reduce the transfer size of most text responses. In Apache, this is done with mod_deflate (see the H5BP config for an example of how to set this up).
A while back, I was setting gzip up on my server and wanted a simple way to verify that it was working and to check how much transfer was saved. One simple way to verify it is working is with curl on the command line. If you run curl -I -H 'Accept-Encoding: gzip,deflate' example.com and see the header Content-Encoding: gzip, compression is working. To test the transfer savings, I wrote a simple script using PHP’s curl library. It makes a request with and without the Accept-Encoding: gzip,deflate header and compares the transfer information provided by curl_getinfo().
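A sketch along those lines (not the original script; the savings calculation and command-line handling are my own):

<?php
// Sketch: request a URL with and without Accept-Encoding and compare the
// transferred byte counts.
function transferSize($url, $acceptGzip)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
    if ($acceptGzip) {
        curl_setopt($ch, CURLOPT_HTTPHEADER, array('Accept-Encoding: gzip,deflate'));
    }
    curl_exec($ch);
    // size_download is the number of bytes actually transferred, still
    // compressed if the server compressed the response.
    $size = curl_getinfo($ch, CURLINFO_SIZE_DOWNLOAD);
    curl_close($ch);
    return $size;
}

$url = isset($argv[1]) ? $argv[1] : 'http://example.com/';
$plain = transferSize($url, false);
$gzipped = transferSize($url, true);
printf(
    "uncompressed: %d bytes\ncompressed: %d bytes\nsavings: %.1f%%\n",
    $plain,
    $gzipped,
    $plain > 0 ? 100 * (1 - $gzipped / $plain) : 0
);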
Dreamhost now has PHP 7, so I’ve switched my main sites to it. They seem at least slightly faster.
Idea: Single character TLDs for permashortlinks
ICANN could make single-character TLDs available for URL-shortening purposes, and make SLDs of one or more characters available on them.
Continue reading post "Idea: Single character TLDs for permashortlinks"
Cool tool for choosing from various easings and getting their CSS transition cubic-bezier values (if applicable): easings.net.
Logging service worker cache headers
As part of the service worker API, a cache interface has been provided to manage cached request-response pairs. In working on the service worker for my site, I wanted to see what headers the cached requests and responses had, but due to the asynchronous way many of the cache properties are accessed, this was a bit verbose. I wrote out a script that I could paste in the JS console to look at all stored request-response pairs in a given cache so I could examine them:
caches.open('cache-name').then(function(_cache){
    _cache.keys().then(function(_keys){
        _keys.forEach(function(_request){
            var _requestLog = [];
            _requestLog.push(['request', _request.url, _request]);
            // Headers.forEach passes (value, name) to its callback.
            _request.headers.forEach(function(_value, _name){
                _requestLog.push(['request header', _name, _value]);
            });
            // The matching response is only available asynchronously.
            _cache.match(_request).then(function(_response){
                _requestLog.push(['response', _response]);
                _response.headers.forEach(function(_value, _name){
                    _requestLog.push(['response header', _name, _value]);
                });
            }).then(function(){
                // Log each pair's entries together once collected.
                _requestLog.forEach(function(_item){
                    console.log.apply(console, _item);
                });
            });
        });
    });
});
Replace cache-name with whatever key you’re using for your cache. Be warned that this will produce a long log if you’ve got more than a few items in the cache. You can also list just the requests you have cached responses for with something like:
caches.open('cache-name').then(function(_cache){
    _cache.keys().then(function(_keys){
        _keys.forEach(function(_request){
            console.log(['request', _request.url, _request]);
        });
    });
});
Self-signed certificate for testing
In playing with service workers, I set up a self-signed SSL certificate for my local development environment, using instructions from debian.org. It was very simple, since I didn’t need the security guarantees of a real operating site. Creating the certificate and key took a single command:
openssl req -new -x509 -days 365 -nodes -out /path/to/server/config/certs/sitename.pem -keyout /path/to/server/config/certs/sitename.key
You then just need to set things up in the server configuration (Apache in my case), with mod_ssl installed and enabled.
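A rough sketch of what that can look like (placeholder paths and server name, not my exact configuration):

# Enable mod_ssl, then point a vhost at the generated certificate and key.
LoadModule ssl_module modules/mod_ssl.so
Listen 443

<VirtualHost *:443>
    ServerName sitename.localhost
    DocumentRoot /path/to/site/web
    SSLEngine on
    SSLCertificateFile /path/to/server/config/certs/sitename.pem
    SSLCertificateKeyFile /path/to/server/config/certs/sitename.key
</VirtualHost>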
First play with service workers
I started playing with service workers as a client-side cache manager a bit tonight, using this Smashing Magazine article as a guide. I’ve been reading a bit here and there about them, intrigued by their role in making web sites installable as apps and their ability to let sites function even while offline. However, my site’s current lack of pages, other priorities, and the learning curve and setup work involved kept me from playing with them until now.
Workers require HTTPS unless, luckily, you are serving from localhost. I had to modify my local app install to use that instead of the more site-indicative name it was using. Workers also require placement at or above the path level they apply to, or theoretically a Service-Worker-Allowed header, though I was unable to get that working; I’m assuming this is for some security reason. Because my file is stored in a Symfony bundle and because I am serving multiple sites with the same application, I didn’t want an actual file in my web root, so I made a Symfony route and action that pass through the file.
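A hypothetical sketch of such a pass-through action (the route, class, and file paths are my illustration):

<?php
// Sketch: serve the bundle's service worker file from a root-level URL so
// its scope can cover the whole site.
use Sensio\Bundle\FrameworkExtraBundle\Configuration\Route;
use Symfony\Component\HttpFoundation\Response;

class ServiceWorkerController
{
    /**
     * @Route("/service-worker.js")
     */
    public function fileAction()
    {
        $path = __DIR__ . '/../Resources/public/js/service-worker.js';
        return new Response(file_get_contents($path), 200, array(
            'Content-Type' => 'application/javascript',
        ));
    }
}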
On my site, I’m using Apache’s mod_deflate and mod_filter to compress my compressible responses (mostly text), with a setup based on h5bp’s server config. I got my sites running over HTTPS recently, and today, when looking at my site performance with webpagetest.org, I noticed that my content wasn’t being compressed, though it was still working fine over HTTP. I noticed in h5bp’s comments that the <IfModule mod_filter.c> wrapper could be removed in Apache versions below 2.3.x. I removed it, and sure enough, compression was working again. I’m not sure why the behavior differed between the two protocols. Perhaps Dreamhost runs separate versions of Apache for them, or perhaps something differs in the virtual host configurations. Regardless, it’s working now; I just hope this doesn’t cause problems whenever they move to Apache 2.4.
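For illustration, the change amounts to removing the wrapper around the compression directives (abbreviated and adapted from h5bp’s config, not my exact file):

<IfModule mod_deflate.c>
    # This block was previously wrapped in <IfModule mod_filter.c>; with
    # that wrapper present, these directives were skipped over HTTPS on my
    # host, so responses went out uncompressed.
    AddOutputFilterByType DEFLATE text/html text/plain text/css text/xml application/javascript application/json
</IfModule>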