Random outpourings of a software developer
I have been experimenting with the Heroku scheduler.
This means creating custom process types.
We found a fun exception: you can’t put hyphens in the name.
It does not fail with any kind of error; it just fails to install.
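As a sketch (the jar names here are made up), the Procfile for a custom process type might look like this:

```
# A custom process type. Note: a name with a hyphen,
# such as "scheduler-job", would silently fail to install.
web: java -jar target/web.jar
schedulerjob: java -jar target/scheduler.jar
```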
Here is an example of using Angular to add d3 as a directive:
http://odiseo.net/angularjs/proper-use-of-d3-js-with-angular-directives
This looks like the obvious library to try using Google Cloud with:
I have been experimenting with using Google BigQuery recently.
It looks to be an incredibly useful (and cheap) means of storing and using lots of data.
In fact it is so cheap that asking management for permission in a meeting can cost more in staff time than a year’s use of the data. This means that with the right policy it is possible to store whatever data you think might be of use.
The downside is the documentation. It looks comprehensive at first, until you try to use it for something. The Java code samples are full of sections such as:
// Insert useful code here
There does not even seem to be a comprehensive list of the data types, so here goes:
STRING
BOOLEAN
TIMESTAMP
INTEGER
FLOAT
Comments like that fill in where the key points should be (such as defining the fields in a BigQuery table).
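For what it’s worth, those types can be used to define a table’s fields in a JSON schema file for the bq command-line tool. A minimal sketch (the field names are made up):

```json
[
  {"name": "user_id",    "type": "INTEGER"},
  {"name": "name",       "type": "STRING"},
  {"name": "active",     "type": "BOOLEAN"},
  {"name": "score",      "type": "FLOAT"},
  {"name": "created_at", "type": "TIMESTAMP"}
]
```

That can then be fed to something like `bq mk --table mydataset.mytable schema.json` (dataset and table names hypothetical).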
BigQuery does have a sensible structure:
Projects are for billing.
These contain datasets, which are used for access control (we have a set of these per environment: qa, demo).
Datasets contain tables (and views – but I have not used these yet).
http://www.liquibase.org/documentation/command_line.html
This could provide an easy means of building a tool to check the validity of a Liquibase migration script (it would be good to use this as a preview).
I would recommend using the XML version of the migration scripts. YAML is too odd and has too little tool support. We lost an hour’s development time to a misplaced space in an index file…
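For illustration, a minimal XML changelog (changeset id, author, and names are all hypothetical) – the structure is explicit enough that a misplaced space cannot silently change the meaning:

```xml
<databaseChangeLog
    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.4.xsd">
  <changeSet id="add-person-surname-index" author="dev">
    <createIndex tableName="person" indexName="idx_person_surname">
      <column name="surname"/>
    </createIndex>
  </changeSet>
</databaseChangeLog>
```

The command-line tool can then check a script with `liquibase --changeLogFile=changelog.xml validate` (database connection options omitted here).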
This is a useful article on using S3 as a private maven repository:
http://jmchung.github.io/blog/2015/03/01/using-amazon-s3-as-a-private-maven-repository/
It is better to use this than to try to use a git repository – git repos make inefficient Maven repos.
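A sketch of the relevant pom.xml fragment (the bucket name is made up, and the S3 wagon from the article also has to be registered as a build extension):

```xml
<distributionManagement>
  <repository>
    <id>s3-releases</id>
    <url>s3://my-maven-bucket/release</url>
  </repository>
  <snapshotRepository>
    <id>s3-snapshots</id>
    <url>s3://my-maven-bucket/snapshot</url>
  </snapshotRepository>
</distributionManagement>
```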
Vodafone have just fallen victim to a classic scam.
They will send a phone to an address that is not associated with the bank details that they have!
A courier delivers a phone to an address (unexpectedly). Half an hour later a motorcyclist collects the “mistaken delivery”.
The delivery company is genuine, the phone is real yet no actual bank details have been checked. Vodafone have sent a phone out without checking the address associated with the bank account that is paying for it.
In addition the customer services automated phone system requires you to have an account number before they will talk to you.
The online chat system (clearly staffed from India) is more useful and they are able to provide details of the fraud team number – but even that maze has a few dead ends. You get to choose 1 for fraud or other phone problems, but the next menu can’t cope with general fraud.
I have been a fan of Neo4j for a while. Until recently I had only been using it locally on my machine to investigate a few simple uses.
Recently I found out about the Heroku add-on GrapheneDB (https://elements.heroku.com/addons/graphenedb). This is a relatively cheap (for small graphs, free) cloud installation of Neo4j.
This meant that with a little effort I could have a set of scripts extract the current state of my employer’s CMS system and then provide an up-to-date view of the data in a trivially queryable form.
I have found that the easiest way to get control over the graph is simply to start from scratch each day. The script extracts a set of nodes and a list of relationships between them and loads them up. I know that it would be much faster to use the CSV import tools, but I have yet to find a satisfactory means of getting the labels set correctly.
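The daily wipe-and-reload can be sketched in Cypher roughly like this (the labels and properties are hypothetical; DETACH DELETE needs Neo4j 2.3 or later):

```
// Clear the whole graph – acceptable for a small daily rebuild.
MATCH (n) DETACH DELETE n;

// Recreate the nodes extracted from the CMS.
CREATE (:Article {cmsId: 1, title: 'Home'});
CREATE (:Article {cmsId: 2, title: 'About'});

// Recreate the relationships between them.
MATCH (a:Article {cmsId: 1}), (b:Article {cmsId: 2})
CREATE (a)-[:LINKS_TO]->(b);
```

LOAD CSV would be much faster, but plain Cypher cannot set a label from a CSV column without workarounds – which is exactly the labelling problem mentioned above.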
This is the useful error message that my mother got when trying to upgrade Free AVG 2015 on windows 10.
To skip to the end of the story look for a utility called AVGRemover on their site – it will uninstall all AVG products cleanly allowing new ones to be installed.
I was called on to help. The first attempt was using the latest installer to upgrade. It goes most of the way through the process (including a reboot), then fails with the title message.
OK, so the next attempt was to uninstall the previous version. This turned out to be trickier than it needed to be, since the avgagent service fails to uninstall with the cryptic message that you do not have enough permissions to uninstall it.
Rerunning the uninstaller as admin gave the same result.
Attempted to stop the avgagent service as admin – still prevented.
Searches of the avg forum found a help operator had suggested using another utility (which failed to work even for the questioner).
Only when the feedback form had been given a detailed list of the problem did it provide a link to the uninstaller.
This is not a very friendly experience for an end user (or indeed an experienced software developer). If AVG know that their uninstaller fails often enough that they put a repair utility on the site, they should at least have the decency to link to it directly.
I have just upgraded my home machine to Windows 10.
The upgrade was horribly slow (3 evenings of waiting for updates).
So far it looks like Windows 8 with StartIsBack enabled.
One minor annoyance – it reset the PowerShell execution policy, so scripts were blocked again.
So far it’s a bit meh – but that is how upgrades should be.
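If the upgrade has reset the execution policy, restoring it is a one-liner from an elevated PowerShell prompt (assuming RemoteSigned was the setting you wanted):

```
Set-ExecutionPolicy RemoteSigned
```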