Friday, July 15, 2016

RTL-SDR: Logging home energy consumption

I have been using an energy meter for a while to track energy consumption at home. The system consists of a transmitter and a receiver. The transmitter is placed in the meter box with a clamp around the live feed going into the house, while the receiver sits in my office. The transmitter sends the instantaneous power consumption in watts (joules/sec) at set intervals, and the receiver converts these readings into energy consumed in kWh and tracks consumption for a day, week, month and year.

While this system works and allows me to track consumption, I wanted a bit more flexibility in the way the energy consumption was displayed. For example, one requirement was to break down energy consumption by time of day.

There were multiple solutions available.
1) Use an energy consumption monitor which also provides a cloud-based reporting service. I rejected this as it locks me into a particular vendor who may one day decide to shut up shop.
2) Use one of the monitors which allow you to download data off the receiver and process it yourself. This is a manual process which requires connecting to the device with a USB cable to pull the data.
3) Capture the data sent by the transmitter directly and process it. This is the method I finally chose.

The energy consumption monitor I have is an Efergy Elite Classic (bought second hand on eBay for GBP 21). The transmitter operates in the 433 MHz range and can be set to transmit data at 6, 12 or 18 second intervals.
I also purchased an RTL-SDR USB dongle (GBP 11.99, again bought off eBay) which allows me to tune into various frequencies using software on a computer. The software (I use gqrx on Linux) can also be used to scan and determine exactly which frequencies have data being transmitted on them.
An idle Raspberry Pi was requisitioned for this project and happily drives the RTL-SDR dongle.

Looking around for ideas, I came across the rtl_433 project, which uses the rtl-sdr libraries and has a large number of decoders already in place. It scans the 433 MHz band and automatically decodes data packets from any of the supported devices. The utility already supports the Efergy energy consumption monitor, which is a big plus.

Once the dongle is connected and rtl_433 built, we can decode the transmissions over the air with the command
rtl_433 -f 433550000 -R 36
By default, the rtl_433 utility tunes to 433920000 Hz. The Efergy transmitter transmits on 433550000 Hz, so I have to pass this frequency with the -f option. This is where I used gqrx to determine the exact frequency the Efergy transmitter transmits on.

The output is of the form
Power consumption at 110 volts: 92.40 watts
Power consumption at 115 volts: 96.60 watts
Power consumption at 120 volts: 100.80 watts
Power consumption at 220 volts: 184.80 watts
Power consumption at 230 volts: 193.20 watts
Power consumption at 240 volts: 201.60 watts
The mains power in the UK is at 240 volts, so the relevant line shows my consumption at that voltage: 201.60 watts. The readings in watts are then further processed using a combination of grep, sed and a python script and uploaded to Thingspeak.
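
As a rough illustration of that processing step, here is a minimal Python sketch that runs rtl_433 as a child process and pulls out the 240 volt reading. My actual grep/sed pipeline is not shown here, so treat the regular expression and the structure of this script as assumptions.

import re
import subprocess

# Run rtl_433 with the options shown above and read its output line by line.
cmd = ["rtl_433", "-f", "433550000", "-R", "36"]
proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, universal_newlines=True)

# Matches lines such as "Power consumption at 240 volts: 201.60 watts"
pattern = re.compile(r"Power consumption at 240 volts:\s*([\d.]+) watts")

for line in proc.stdout:
    match = pattern.search(line)
    if match:
        watts = float(match.group(1))
        print(watts)  # hand the value to the upload step described below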

Thingspeak is a platform aimed at IoT applications. It provides a REST API to record data, which can then be analysed and used as needed. As a bonus, there are a few Android apps available which can read the data off the Thingspeak platform.
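
A single reading can be pushed to a Thingspeak channel with one HTTP request to its update endpoint. Below is a minimal sketch using the Python requests library; the write API key and the field number are placeholders rather than the values from my channel.

import requests

THINGSPEAK_URL = "https://api.thingspeak.com/update"
WRITE_API_KEY = "YOUR-WRITE-API-KEY"  # placeholder for the channel's write key

def upload(value):
    # field1 is assumed to be the channel field holding the power reading
    resp = requests.get(THINGSPEAK_URL,
                        params={"api_key": WRITE_API_KEY, "field1": value},
                        timeout=10)
    resp.raise_for_status()

upload(201.60)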

I first started recording the data in watts, updating the Thingspeak channel every time the transmitter transmitted. However, the resulting graph was difficult to read because of the huge spikes every time I turned the kettle or the microwave on. To get a better understanding of my energy consumption, I now convert the watts to kWh and update the Thingspeak channel every hour instead, which gives me the consumption over each hour. I also prefer the readings in kWh as my energy company charges me per kWh.
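
The conversion itself is straightforward: the average power in watts over the hour, multiplied by one hour and divided by 1000, gives the energy in kWh. A sketch of that hourly roll-up, assuming readings arrive as plain watt values and reusing the upload() helper from the previous sketch:

import time

readings = []              # watt readings collected during the current hour
hour_start = time.time()

def add_reading(watts):
    global hour_start
    readings.append(watts)
    if time.time() - hour_start >= 3600:
        # mean power (W) over one hour divided by 1000 -> energy in kWh
        kwh = (sum(readings) / len(readings)) / 1000.0
        upload(kwh)
        del readings[:]
        hour_start = time.time()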

The resulting graph:


Sunday, June 26, 2016

MQTT

MQTT is a lightweight protocol on top of TCP/IP which is used to transfer data between nodes. It was built for devices with low network bandwidth, and the low overhead of the protocol has made it a popular choice for use with IoT devices.

The protocol uses a publish/subscribe model wherein the data source publishes to a topic on a broker and the recipients subscribe to that topic. This model is similar to Twitter, where a tweet sent by a user can be read by the multiple users who follow that user.

A broker is software, run on more capable hardware, which acts as a data distributor. A topic is similar to a subject and is built as a tree structure with its components separated by '/'. Wildcards are allowed when specifying the topics to subscribe to. Mosquitto (http://mosquitto.org/) is the most popular open source broker at the moment.

Clients connect to the broker and subscribe to a topic. A client wishing to transmit data (the sender) publishes data for that topic to the broker, and the subscribers to the topic then receive the data and process it further. MQTT is data agnostic, so the payload can take any format the sender wishes.

MQTT APIs are available for multiple programming languages at the Eclipse Paho website.
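
As a small illustration of the publish/subscribe flow described above, here is a sketch using the Paho Python client against a local Mosquitto broker. The broker address and topic names are made up for the example.

import paho.mqtt.client as mqtt

# Print every message arriving on the subscribed topics.
def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883, 60)   # broker assumed to be running locally

# '+' is a single-level wildcard, so this matches any room's temperature topic.
client.subscribe("home/+/temperature")

# A sender would publish to a concrete topic on the same broker.
client.publish("home/livingroom/temperature", "21.5")

client.loop_forever()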

The PubSubClient library is available for use with Arduinos.

Some use cases for MQTT are
a) Sensors, such as temperature sensors, periodically sending information about their surroundings.
b) Sending commands to a device from various sources.

An example project which uses MQTT in an Arduino setting is available on the Make website. Here the home automation software openHAB is configured with MQTT bindings to send commands specifying the required colour to the broker (Mosquitto) running on a Raspberry Pi. An ESP8266 connected to an LED strip subscribes to the topic on this broker, reads the colour data and sets the light colour accordingly.

More information on the protocol is available in the mqtt man page at
http://mosquitto.org/man/mqtt-7.html

Wednesday, June 01, 2016

Using Mock

This article is from my notes on setting up a RHEL 7 build environment on a RHEL 6 machine. The same approach can be used to set up a build environment for any of my guests on the host machine.

I have a RHEL 6 based server grade machine which hosts VMs running RHEL 6, RHEL 7 and Fedora rawhide. In some instances, I need to compile kernels from a git tree for a certain guest machine. Doing this on a VM usually means exporting the git tree over NFS and, given the resources the VM has, it takes a long time. The idea is to compile kernels on the more capable host machine and simply run "make install modules_install" on the guest system.

I export the filesystem containing the git trees using NFS
# cat /etc/exports
/NotBackedUp/scratch 192.168.140.0/24(rw,no_root_squash)


To set up mock:

1) Install mock.
# yum install -y mock

2) Set the epel7 environment as the default environment.
cd /etc/mock; ln -s epel-7-kernel-x86_64.cfg default.cfg

3) Edit epel-7-kernel-x86_64.cfg and add the lines below.

Since the environment is intended to build kernels, install the required dependencies along with some other tools:
config_opts['install'] = 'elfutils dwarves rpm-build flex bison inotify-tools bc openssl openssl-devel openssh-clients rsync'

Allow bind mounts and mount the local directory /NotBackedUp/scratch to /scratch in the mock environment:
config_opts['plugin_conf']['bind_mount_enable'] = True
config_opts['plugin_conf']['bind_mount_opts']['dirs'].append(('/NotBackedUp/scratch', '/scratch'))
4) Add a user and group named mock.

5) Switch over to user mock and initialise the mock environment using the command 
$ mock --init

6) Switch over to the mock shell
$ mock shell

At this point, we are in a chroot shell as the root user in the mock EPEL 7 environment. The directory containing the git trees is available at the path specified in the mock configuration file; in my case, this is /scratch.

7) On the guest machine, make sure you mount the exported NFS share and run 
make menuconfig/oldconfig/localmodconfig
to set up the .config file.

8) Switch back over to the mock environment on the host machine and run 
make; make modules
to build the kernel and the modules.

9) Once the build is done, on the guest vm, go back to the git tree and run
make install modules_install
to install the newly built kernel.

10) Setup grub on the guest machine and reboot into the new kernel.

