Moving from gitolite to gogs

Problem: You have a ton of git repositories in gitolite and you'd like to switch to the GitHub-esque GUI provided by Gogs.

Gogs is super easy to get set up, and it has thoughtfully added tools that make it useful for a private intranet-type setup. It has not, however, come up with a great way of mass-importing git repositories from another tool.

The web interface includes a "migration" tool, but migrations can only be completed one at a time. I had 150 git repos to migrate, so I added one repo through the UI and then poked at the database.

Gogs, like gitolite, uses bare repositories. Loading them into Gogs is as easy as rsync'ing them into the Gogs repository directory and adding some rows to the Gogs database so Gogs knows how to administer them.

I made a script to help with that task. Note that I'm using MySQL as my database and my repos default to private.
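
Roughly, the approach looks like the sketch below. Treat it only as a sketch: the paths, the owner ID, the MySQL credentials, and the column list for the repository table are placeholders and assumptions, so compare them against your own install (a quick "DESCRIBE repository;" in the gogs database shows the real columns) before running anything.

#!/bin/bash
# Bulk-import bare gitolite repositories into Gogs.
# Everything below is a placeholder: adjust the paths, OWNER_ID, and MySQL
# credentials, and verify the column names with "DESCRIBE repository;" first.

GITOLITE_REPOS=/home/git/repositories             # where gitolite keeps its bare repos
GOGS_REPOS=/home/git/gogs-repositories/youruser   # Gogs repository root for the owning user
OWNER_ID=1                                        # that user's id in Gogs' "user" table

for repo in "$GITOLITE_REPOS"/*.git; do
  name=$(basename "$repo" .git)

  # Gogs also uses bare repositories, so a straight copy is all it takes.
  rsync -a "$repo/" "$GOGS_REPOS/$name.git/"

  # Add a row so Gogs knows about the repo. Mine default to private (is_private = 1).
  mysql -u gogs -p'yourpassword' gogs <<SQL
INSERT INTO repository (owner_id, lower_name, name, is_private, created_unix, updated_unix)
VALUES ($OWNER_ID, LOWER('$name'), '$name', 1, UNIX_TIMESTAMP(), UNIX_TIMESTAMP());
SQL
done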

With this, it took about a second to import all of these repos. I did find one other person who had batch-imported repos and chose to do it with curl, but I couldn't get that to work. All in all, it took about 4 days to figure out how to get Gogs set up and get all the repos into it. Hopefully this script makes that process much quicker for you.

JSON to CSV download in the browser

While using a JavaScript-based data grid that displayed JSON data provided to the page, I thought it would be smart not to re-download all of the data just to export it as a CSV file.

My solution was to use the "Papa Parse" library to create the CSV and URL.createObjectURL() to make that local data downloadable as a file.

I also tried setting base64-encoded data directly, like this solution does, but it choked on my 6-10 MB report results.

jQuery(document).ready(function() {
    // My JSON-formatted data from PHP.
    var dataSet = <?=json_encode($data)?>;
    // Point our download link at an object URL for a Blob of CSV data.
    jQuery('#downloadCsv').attr('href', URL.createObjectURL(new Blob([Papa.unparse(dataSet)], { type: "text/csv" })));
});

And the HTML for the link:

<a id="downloadCsv" class="button button-secondary" download="reportdata.csv">Download CSV</a>


Debugging git push/clone issues with IPv6

Problem: You're doing "git push" or "git clone" and it takes forever to connect.

This could be a lot of things. In my case it was a firewall issue on a dual stack server hosting my repository.

How can you tell?

git config --global core.sshCommand "ssh -vvv"

This enables verbose mode for SSH. Now try to clone or pull your repo and you'll get a lot more information.

In my case, it was a DNS resolution problem.

user@host:test $ git pull origin master
OpenSSH_7.4p1, LibreSSL 2.5.0
debug1: Reading configuration data /Users/dkilgo/.ssh/config
debug1: Reading configuration data /etc/ssh/ssh_config
debug2: resolving "test.me" port 49001
debug2: ssh_connect_direct: needpriv 0
debug1: Connecting to test.me [xxxx:xx:xxxx:x:xxx:xx:xx:xxx] port 49001.
debug1: connect to address xxxx:xx:xxxx:x:xxxx:xx:xx:xxx port 49001: Operation timed out
debug1: Connecting to test.me [000.00.00.000] port 49001.
debug1: Connection established.
….

I got a timeout on the IPv6 address: the SSH port on this dual-stack server is being blocked over IPv6. Once that attempt fails, SSH falls back to the IPv4 address, which works.

Since I don't have control over the network here, I'll work around the issue with SSH's client config.

How do I fix this without breaking everything else that uses IPv6?

You could just disable IPv6 in your network settings, but in 2017 that's not the most practical solution. Instead, we can do it for just this one host in our SSH settings and get the same effect.

Edit your ~/.ssh/config and add:

Host test.me
    AddressFamily inet

This tells SSH to use only IPv4 for this host. That makes my one buggy host happy and doesn't mess with the rest of the internet. If you discover IPv4 is the problem for you and IPv6 works, set it to "inet6" instead.

Don't forget to turn off that verbose logging we turned on earlier.

git config --global core.sshCommand "ssh"

If this didn't fix your issue, try git's debug options.

GIT_CURL_VERBOSE=1 GIT_TRACE=1 git pull origin master

The GIT_TRACE environment var makes git output more debug info during any operation. Setting it like this means it will only impact this command.

If you want this to apply to anything that uses git, not just a single command, you can export the variable.

export GIT_TRACE=1

and any call to git will output debug info, even from a tool like SourceTree or git embedded in an IDE. More info on git environment variables.

Using EFF’s CertBot with Apache 2.2 and CentOS 6

EFF has created a wonderful tool called CertBot to automate the retrieval and installation of Let's Encrypt certificates. The documentation is really good, but it did require a little trial and error to get things working. Here's a walkthrough with some of the gotchas I encountered.

Why do I need CertBot?

Let's Encrypt certificates are only valid for 90 days, and the verification and renewal process is tedious to do manually. CertBot automates retrieving and renewing certificates, has a built-in web server that can stand in during the verification step, and can even modify your web server config and install the certificates for you if you happen to be using nginx or Apache 2.4.

Install:

Before you start, you should apply all software updates for your OS and restart. In my case, one of the updates overwrote a config file I had customized, which made the web server fail to start.

I was using CentOS 6, which doesn't have a built-in package for CertBot. EFF also provides a self-contained version called "certbot-auto" which can install its own dependencies.

Here’s how to install that:

user@host:~$ wget https://dl.eff.org/certbot-auto
user@host:~$ chmod a=rx ./certbot-auto
user@host:~$ ./certbot-auto --help

I’d also recommend moving “certbot-auto” into “/usr/local/bin” so it’s available to other users or cron scripts.

user@host:~$ mv ./certbot-auto /usr/local/bin/certbot

Usage:

The docs are great. Have a look. https://certbot.eff.org/docs/using.html

My site redirects all traffic to https, which breaks the verification process because verification needs to happen over http. The "standalone" mode solves this: you stop your web server and CertBot answers on port 80 itself to complete the verification. The CLI also provides pre- and post-run "hooks" (--pre-hook and --post-hook) which can stop and start your web server automatically to minimise downtime.

For my domain this looked something like:

user@host:~$ certbot certonly --standalone -d www.your.site --pre-hook "service httpd stop" --post-hook "service httpd start"

If your setup includes both http and https, you can instead use Apache to serve the verification files via webroot mode:

sudo certbot certonly --webroot -w /var/www/vhosts/default/html -d www.your.site -d your.site

You can include up to 10 domains on each certificate.

Where does your certificate get stored?

Good question. On CentOS, CertBot creates an /etc/letsencrypt folder, and each certificate you generate gets its own directory under "live". The commands above created /etc/letsencrypt/live/www.your.site with your certificate files.
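
List that directory and you should see something like this (the files are symlinks into CertBot's archive folder):

user@host:~$ ls /etc/letsencrypt/live/www.your.site/
cert.pem  chain.pem  fullchain.pem  privkey.pem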

Here’s what you’ll need to add to your httpd.conf or virtual host to use this new certificate.

#Using letsencrypt certificates.
SSLCertificateFile /etc/letsencrypt/live/www.your.site/cert.pem
SSLCertificateKeyFile /etc/letsencrypt/live/www.your.site/privkey.pem
SSLCertificateChainFile /etc/letsencrypt/live/www.your.site/chain.pem
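
After adding those lines, check the config syntax and restart Apache so it picks up the new certificate (assuming the stock CentOS httpd service):

user@host:~$ apachectl -t
user@host:~$ service httpd restart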

Renewal

The last thing to set up is a cron task to automate renewal of our certificate(s).
"certbot renew -n" attempts to renew any certificates expiring in less than 30 days, and the -n flag makes it non-interactive, which is ideal for a cron task.
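
Here's roughly what that looks like in root's crontab; the schedule is arbitrary, and the pre/post hooks are only needed if you used the standalone method (they free up port 80 during verification):

# Try twice a day; certbot only renews certificates within 30 days of expiry.
17 3,15 * * * /usr/local/bin/certbot renew -n --pre-hook "service httpd stop" --post-hook "service httpd start"
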
And that's it. You're all set up to take advantage of a Let's Encrypt certificate with CentOS and Apache 2.2.
It was much easier than I thought, and after a lot of poking through the documentation I was able to piece together what needed to be done. Hopefully this saves you a few hours and a couple of server alerts.

Just one more thing. Ever heard of CAA?

One more wrinkle here is CAA checking. Freely available certificates mean it's not that hard to request a certificate that appears valid for a popular website. A CAA record is a DNS record that tells certificate authorities which of them are allowed to issue certificates for a given domain, and all CAs, Let's Encrypt included, will be required to check it after September 2017. Because this requires a new DNS record type, not every DNS host or registrar is ready for it. I'm using Namecheap for my .site address and CAA records are currently unsupported there. Check with your domain registrar, or whoever provides your DNS service, to see whether they support CAA records. If they don't, you might have to choose a new DNS provider in order to keep using Let's Encrypt certificates.

I love Cockpit CMS

I've had a lot of fun playing with Cockpit CMS.
It's "an API-driven CMS without forcing you to make compromises in how you implement your site."

It's a CMS without a front end; the front end is completely up to you. This tool takes care of all the back-end stuff: it provides a UI for building content types, uploading and editing media, editing content, doing site backups, and handling authentication. How the data is displayed is entirely up to you.
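
For example, once you've defined a collection (say "posts") and generated an API token in the admin UI, your front end can fetch its entries with a single HTTP call along these lines (the host, collection name, and token are placeholders, and the endpoint path may vary by Cockpit version):

curl 'https://your.site/api/collections/get/posts?token=YOUR_API_TOKEN'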

Built on a custom microframework called "Lime", which looks a lot like Slim 2, and a storage layer called "MongoLite", it provides everything you need to build small sites that are a joy to work with.

The documentation includes a walk-through on how to build a simple blog.