
Sunday, March 22, 2015

Docker-Jenkins-Slave 2.0 (DJS2)

I often need to build RPMs or test my code on different operating systems with different environments, library versions, etc. On my host machine I use Arch Linux, but my job requires writing software for RHEL/CentOS. It's useful to use a build bot like Jenkins to run these builds automatically, triggering a build after every commit.

Anyway, installing Jenkins on the host machine and using Docker containers as slaves works, but it can clog your computer with garbage files that are hard to clean up. DJS2 offers an isolated environment (an LXC container) in which Jenkins and all the Docker slaves run; only the Jenkins data folder with your jobs is stored on your computer.

So, if you need to test or build your code on different Linux-based operating systems, all you need with DJS2 is an LXC-enabled Linux host machine and Vagrant.

You can get it here: https://github.com/rrader/docker-jenkins-slave

The new version of DJS is already on GitHub. It's in beta and not yet well tested, but it works on my PC. Try it on yours and send me feedback; I'd be really pleased to hear about failures and successful runs alike.

It's really easy to deploy a Vagrant box with different OS slaves on your local machine. For now only CentOS 6 and CentOS 7 are available, but eventually all previously supported OSes (CentOS 5, SUSE, Debian) will be migrated.

Pull-Requests are highly appreciated :)

The following text is part of the README; it explains how to get Jenkins+Docker+LXC working on your computer.

Prerequisites


  1. Vagrant
Optional (but strongly recommended)
  1. vagrant-lxc plugin: vagrant plugin install vagrant-lxc
  2. vagrant-lxc related configuration on Host. See https://github.com/fgrehm/vagrant-lxc/wiki 

Getting started

$ git clone git@github.com:rrader/docker-jenkins-slave.git djs
$ cd djs
$ ./djs.sh up
[lots of vagrant output ....]
==================================================
Jenkins should be available on 10.0.3.74:8080
Start using slaves with adding one
 e.g. # ./djs.sh add centos7-java Luke
Now you should be able to use your just deployed jenkins on http://10.0.3.74:8080 or similar
Now let's add some slaves with ./djs.sh add <image> <name>
$ ./djs.sh add centos7-java Luke
Unable to find image 'antigluk/jenkins-slave-centos6-java' locally
Pulling repository antigluk/jenkins-slave-centos6-java
[lots of docker output ....]
Status: Downloaded newer image for antigluk/jenkins-slave-centos6-java:latest
507e2a18674253d0d7d1f5201ee963681704c2d18310af619f9fcbb0124efaf3
Connection to 10.0.3.74 closed.
This will pull the centos7-java image from Docker Hub and start it. After the command finishes you should see the Luke worker in the "Build Executor Status" section in Jenkins.


Notes [important]

Only centos6-java and centos7-java are ready to use with the Vagrant Jenkins for now.


Links

DJS Repository: https://github.com/rrader/docker-jenkins-slave

Friday, March 13, 2015

New CentOS 7 Maven Slave in docker-jenkins-slave

Today I pushed a new Jenkins slave template named "centos7-java" into docker-jenkins-slave.

It's based (thank you, Captain Obvious) on CentOS 7, and its purpose is to build Maven projects, so the image has Maven preinstalled.

It's pretty ordinary. But the news here is that the new image doesn't contain the awful SSH daemon that breaks Docker's philosophy of running a single process per container. Now the ENTRYPOINT of the container just runs the Swarm jar itself:

ENTRYPOINT ["java", "-jar", "/root/swarm-client.jar", "-master", "http://172.17.42.1:8070", "-mode", "exclusive", "-executors", "1", "-fsroot", "/root"]

CMD ["-labels", "docker-centos7-java", "-name", "Chewbakka"]

Previously, starting a slave container took these steps:

1) Start the container
2) Inspect its IP
3) Connect by SSH using special keys that had to be baked into the Docker image
4) Run the Swarm jar (over SSH)

Drawbacks of this approach:
1) You can't pass optional arguments to Swarm (without editing the start script)
2) It violates Docker's philosophy
3) An overcomplicated start script that has to know the container's IP address

So, what's really new:
1) You can pass Swarm arguments to the "docker run" command itself
2) The slave start script was reduced to a one-liner, without any SSH/key magic
3) The start script mounts a Maven cache volume to share the ".m2" local repository between machines

Links

Dockerfile and scripts: https://github.com/rrader/docker-jenkins-slave/tree/master/centos7-java
Why you don't need to run SSHd in your Docker containers: http://blog.docker.com/2014/06/why-you-dont-need-to-run-sshd-in-docker/
Swarm Available Options: https://wiki.jenkins-ci.org/display/JENKINS/Swarm+Plugin#SwarmPlugin-AvailableOptions

Caching Maven Local Repository in Docker

I'm using Docker instances as Jenkins slaves and run my containers like this:

# docker run -d --name="Chewbakka" antigluk/jenkins-slave-centos7-java -labels docker-centos7-java -name "Chewbakka"

(actually, I do it using docker-jenkins-slave )

However, it doesn't make sense to keep a separate Maven local repository in every container.

This can be done using Docker data volumes.
We can mount a host directory onto any directory inside the container by specifying the -v argument like this:

-v /tmp/docker-m2cache:/root/.m2:rw

This will mount the host directory /tmp/docker-m2cache onto the container's /root/.m2.

The resulting command will be:

# docker run -d --name="Chewbakka" -v /tmp/docker-m2cache:/root/.m2:rw antigluk/jenkins-slave-centos7-java -labels docker-centos7-java -name "Chewbakka"

Thursday, February 19, 2015

Debug Beeline Client for Hive

Just a note on how to enable debug mode in beeline (or any other Hive client).

To enable remote debugging, we need to pass the "-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005" arguments to the JVM.

The tricky part is finding the place where the JVM is executed. It's the file $HADOOP_HOME/hive-client/bin/ext/beeline.sh; in a Hortonworks (HDP) installation it will be /usr/hdp/current/hive-client/bin/ext/beeline.sh

The JVM is started on the line

exec $HADOOP jar ${beelineJarPath} $CLASS $HIVE_OPTS "$@"

but the -Xdebug options should be placed in the HADOOP_CLIENT_OPTS variable:

export HADOOP_CLIENT_OPTS="$HADOOP_CLIENT_OPTS -Dlog4j.configuration=beeline-log4j.properties -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005"
 


-Xdebug - enables remote debugging
-Xrunjdwp - sets the debugging configuration, where:
    server - whether to start a debug server or connect to a debugger (usually server=y for remote debugging via IDEA)
    suspend - whether to freeze program startup and wait until a debugger connects
    address - the port of the server you will connect to

Now if you just start # beeline
you'll see that a debug server has started on port 5005, and you can connect to it via IDEA (or whatever you prefer).

Wednesday, February 4, 2015

Tuesday, September 2, 2014

Docker Jenkins Slave Generator


tl;dr: The service for generating Dockerfiles for Jenkins slaves is up and running here: http://docker-jenkins-slave.herokuapp.com/ .

Docker jenkins slave


The beginning of the story is here: http://antigluk.blogspot.com/2013/10/docker-jenkins.html [in Russian].
Abstract: if you want to build your project in different environments with Jenkins (e.g. Jenkins is installed on Arch Linux and you want to build an RPM on CentOS 6) but don't want to use virtual machines, which waste RAM and CPU, Docker is a good choice.

Wednesday, July 30, 2014

Parallelism or Concurrency

Concurrency is when two tasks can start, run, and complete in overlapping time periods. It doesn't necessarily mean they'll ever both be running at the same instant, e.g. multitasking on a single-core machine.
Parallelism is when tasks literally run at the same time, e.g. on a multicore processor.
Source
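A minimal Python sketch of the distinction (my own illustration, not part of the quoted source): threads in CPython interleave under the GIL (concurrency), while separate processes can occupy several cores at once (parallelism).

```python
import threading
import multiprocessing

def count(n):
    # CPU-bound busy loop standing in for real work
    while n > 0:
        n -= 1

if __name__ == "__main__":
    # Concurrency: both threads make progress in overlapping time periods,
    # but CPython's GIL means they never execute bytecode at the same instant.
    threads = [threading.Thread(target=count, args=(100000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Parallelism: two processes can literally run at the same time on two cores.
    procs = [multiprocessing.Process(target=count, args=(100000,)) for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```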

Tuesday, July 15, 2014

Plotting live data from MSP430 ADC in Python

I've finally got something to post.

Last month I've been playing with the TI MSP430 LaunchPad, and when working with the ADC I missed having any visualization. Since the LaunchPad has a UART-USB interface, I decided to plot the incoming data.

I'm using the MSP430G2553, and all the code was written for this controller.

Firmware

At a high level the controller firmware is pretty straightforward: it just sends ADC values to the UART continuously, with one note: before sending anything, we make a "handshake" by receiving a start symbol from the computer. So the high-level algorithm is:
1) Initialize the UART[1] (9600 baud) and the ADC[2]
2) Wait for the start signal (the "handshake")
3) In an endless loop, send the temperature to the UART

ADC initialization to read the temperature (channel 10):
void ADC_init(void) {
    ADC10CTL0 = SREF_1 + REFON + ADC10ON + ADC10SHT_3;
    ADC10CTL1 = INCH_10 + ADC10DIV_3;
}


int getTemperatureCelsius()
{
    int t = 0;
    __delay_cycles(1000); // Not necessary.
    ADC10CTL0 |= ENC + ADC10SC;
    while (ADC10CTL1 & BUSY);
    t = ADC10MEM;
    ADC10CTL0 &=~ ENC;
    return(int) ((t * 27069L - 18169625L) >> 16);  // magic conversion to Celsius
}


Handshake:
  // UART Handshake...
  unsigned char c;
  while ((c = uart_getc()) != '1');
  uart_puts((char *)"\nOK\n");


We wait for '1' and send "OK" when we receive it.

After that, the program starts sending the temperature indefinitely:


  while(1) {
    uart_printf("%i\n", getTemperatureCelsius());
    P1OUT ^= 0x1;
  }


uart_printf converts the integer value into a string and sends it over the UART [3].

The full source code of the firmware is at the bottom of this post.

Plotting Application

I love matplotlib in Python; it's a great library for plotting anything. To read data from the UART I used the pySerial library. That's all we need.

When we connect the LaunchPad to the computer, the device /dev/ttyACM0 is created. That's the serial port we need to use.

Application consists of two threads:
  1. Serial port processing
  2. Continuous plot updating

Serial port processing

Let's define the global variable data = deque(0 for _ in range(5000)), which will contain the data to plot, and dataP = deque(0 for _ in range(5000)), which will contain the smoothed values.

In the serial port thread, we need to open connection:
ser = serial.Serial('/dev/ttyACM0', 9600, timeout=1)
then, make a "handshake":
    ok = b''
    while ok.strip() != b'OK':
        ser.write(b"1")
        ok = ser.readline()
    print("Handshake OK!\n")

As you can see, we wait for "OK" in response to "1". After the handshake, we can start reading data:

    while True:
        try:
            val = int(ser.readline().strip())
            addValue(val)
        except ValueError:
            pass

The UART is not very stable, so sometimes you receive distorted data. That's why I swallow the exceptions.

addValue is the function that processes a value and appends it to the data variable:

avg = 0
def addValue(val):
    global avg

    data.append(val)
    data.popleft()
   
    avg = avg + 0.1 * (val - avg)
    dataP.append(avg)
    dataP.popleft()


It also maintains an exponentially weighted moving average.
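Written out, the update in addValue is the standard exponential moving average recurrence (alpha = 0.1 is the smoothing factor from the code):

    avg_new = avg + alpha * (val - avg) = (1 - alpha) * avg + alpha * val

so recent samples dominate and older ones decay geometrically.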

Continuous plot updating

First, let's create a figure with two plots:

    fig, (p1, p2) = plt.subplots(2, 1)
    plot_data, = p1.plot(data, animated=True)
    plot_processed, = p2.plot(data, animated=True)
    p1.set_ylim(0, 100)  # y limits
    p2.set_ylim(0, 100)


To draw an animated plot, we need to define a function that will update the data:

    def animate(i):
        plot_data.set_ydata(data)
        plot_data.set_xdata(range(len(data)))
        plot_processed.set_ydata(dataP)
        plot_processed.set_xdata(range(len(dataP)))
        return [plot_data, plot_processed]

    ani = animation.FuncAnimation(fig, animate, range(10000),
                                  interval=50, blit=True)

And show the plot window:
    plt.show()

Here's the result of the program's work:

The first plot shows the raw data received through the serial port; the second shows the moving average.

Source codes


Desktop live plotting application:


import matplotlib.pyplot as plt
import matplotlib.animation as animation
import serial
import threading
from collections import deque

data = deque(0 for _ in range(5000))
dataP = deque(0 for _ in range(5000))
avg = 0

def addValue(val):
    global avg

    data.append(val)
    data.popleft()
    
    avg = avg + 0.1 * (val - avg)
    dataP.append(avg)
    dataP.popleft()


def msp430():
    print("Connecting...")
    ser = serial.Serial('/dev/ttyACM0', 9600, timeout=1)
    print("Connected!")

    # Handshake...
    ok = b''
    while ok.strip() != b'OK':
        ser.write(b"1")
        ok = ser.readline()
        print(ok.strip())
    print("Handshake OK!\n")

    while True:
        try:
            val = int(ser.readline().strip())
            addValue(val)
            print(val)
        except ValueError:
            pass


if __name__ == "__main__":
    threading.Thread(target=msp430).start()

    fig, (p1, p2) = plt.subplots(2, 1)
    plot_data, = p1.plot(data, animated=True)
    plot_processed, = p2.plot(data, animated=True)
    p1.set_ylim(0, 100)
    p2.set_ylim(0, 100)
    def animate(i):
        plot_data.set_ydata(data)
        plot_data.set_xdata(range(len(data)))
        plot_processed.set_ydata(dataP)
        plot_processed.set_xdata(range(len(dataP)))
        return [plot_data, plot_processed]

    ani = animation.FuncAnimation(fig, animate, range(10000), 
                                  interval=50, blit=True)
    plt.show()


MSP430 full firmware:

/*
NOTICE
Used code or got an idea from:
    UART: Stefan Wendler - http://gpio.kaltpost.de/?page_id=972
    ADC: http://indiantinker.wordpress.com/2012/12/13/tutorial-using-the-internal-temperature-sensor-on-a-msp430/
    printf: http://forum.43oh.com/topic/1289-tiny-printf-c-version/
*/

#include <msp430g2553.h>

// =========== HEADERS ===============
// UART
void uart_init(void);
void uart_set_rx_isr_ptr(void (*isr_ptr)(unsigned char c));
unsigned char uart_getc();
void uart_putc(unsigned char c);
void uart_puts(const char *str);
void uart_printf(char *, ...);
// ADC
void ADC_init(void);
// =========== /HEADERS ===============


// Trigger on received character
void uart_rx_isr(unsigned char c) {
  P1OUT ^= 0x40;
}

int main(void)
{
  WDTCTL = WDTPW + WDTHOLD;
  
  BCSCTL1 = CALBC1_8MHZ; //Set DCO to 8Mhz
  DCOCTL = CALDCO_8MHZ; //Set DCO to 8Mhz
  
  P1DIR = 0xff;
  P1OUT = 0x1;
  ADC_init();
  uart_init();
  uart_set_rx_isr_ptr(uart_rx_isr);

  __bis_SR_register(GIE); // global interrupt enable

  // UART Handshake...
  unsigned char c;
  while ((c = uart_getc()) != '1');
  uart_puts((char *)"\nOK\n");

  ADC10CTL0 |= ADC10SC;
  while(1) {
    uart_printf("%i\n", getTemperatureCelsius());
    P1OUT ^= 0x1;
  } 
}

// ========================================================
// ADC configured to read temperature
void ADC_init(void) {
    ADC10CTL0 = SREF_1 + REFON + ADC10ON + ADC10SHT_3;
    ADC10CTL1 = INCH_10 + ADC10DIV_3;
}

int getTemperatureCelsius()
{
    int t = 0;
    __delay_cycles(1000);
    ADC10CTL0 |= ENC + ADC10SC;
    while (ADC10CTL1 & BUSY);
    t = ADC10MEM;
    ADC10CTL0 &=~ ENC;
    return(int) ((t * 27069L - 18169625L) >> 16);
}


// ========================================================
// UART
#include <legacymsp430.h>

#define RXD BIT1
#define TXD BIT2

/**
* Callback handler for receive
*/
void (*uart_rx_isr_ptr)(unsigned char c);

void uart_init(void)
{
  uart_set_rx_isr_ptr(0L);

  P1SEL = RXD + TXD;
  P1SEL2 = RXD + TXD;

  UCA0CTL1 |= UCSSEL_2; //SMCLK
  //8,000,000Hz, 9600Baud, UCBRx=52, UCBRSx=0, UCBRFx=1
  UCA0BR0 = 52; //8MHz, OSC16, 9600
  UCA0BR1 = 0; //((8MHz/9600)/16) = 52.08333
  UCA0MCTL = 0x10|UCOS16; //UCBRFx=1,UCBRSx=0, UCOS16=1
  UCA0CTL1 &= ~UCSWRST; //USCI state machine
  IE2 |= UCA0RXIE; // Enable USCI_A0 RX interrupt
}

void uart_set_rx_isr_ptr(void (*isr_ptr)(unsigned char c))
{
  uart_rx_isr_ptr = isr_ptr;  
}

unsigned char uart_getc()
{
  while (!(IFG2&UCA0RXIFG)); // USCI_A0 RX buffer ready?
  return UCA0RXBUF;
}

void uart_putc(unsigned char c)
{
  while (!(IFG2&UCA0TXIFG)); // USCI_A0 TX buffer ready?
     UCA0TXBUF = c; // TX
}

void uart_puts(const char *str)
{
  while(*str) uart_putc(*str++);
}

interrupt(USCIAB0RX_VECTOR) USCI0RX_ISR(void)
{
  if(uart_rx_isr_ptr != 0L) {
   (uart_rx_isr_ptr)(UCA0RXBUF);
  }
}



// ========================================================
// UART PRINTF
#include "stdarg.h"

static const unsigned long dv[] = {
//  4294967296      // 32 bit unsigned max
    1000000000,     // +0
     100000000,     // +1
      10000000,     // +2
       1000000,     // +3
        100000,     // +4
//       65535      // 16 bit unsigned max     
         10000,     // +5
          1000,     // +6
           100,     // +7
            10,     // +8
             1,     // +9
};

static void xtoa(unsigned long x, const unsigned long *dp)
{
    char c;
    unsigned long d;
    if(x) {
        while(x < *dp) ++dp;
        do {
            d = *dp++;
            c = '0';
            while(x >= d) ++c, x -= d;
            uart_putc(c);
        } while(!(d & 1));
    } else
        uart_putc('0');
}

static void puth(unsigned n)
{
    static const char hex[16] = { '0','1','2','3','4','5','6','7','8','9','A','B','C','D','E','F'};
    uart_putc(hex[n & 15]);
}
 
void uart_printf(char *format, ...)
{
    char c;
    int i;
    long n;
    
    va_list a;
    va_start(a, format);
    while(c = *format++) {
        if(c == '%') {
            switch(c = *format++) {
                case 's':                       // String
                    uart_puts(va_arg(a, char*));
                    break;
                case 'c':                       // Char
                    uart_putc(va_arg(a, char));
                    break;
                case 'i':                       // 16 bit Integer
                case 'u':                       // 16 bit Unsigned
                    i = va_arg(a, int);
                    if(c == 'i' && i < 0) i = -i, uart_putc('-');
                    xtoa((unsigned)i, dv + 5);
                    break;
                case 'l':                       // 32 bit Long
                case 'n':                       // 32 bit uNsigned loNg
                    n = va_arg(a, long);
                    if(c == 'l' &&  n < 0) n = -n, uart_putc('-');
                    xtoa((unsigned long)n, dv);
                    break;
                case 'x':                       // 16 bit heXadecimal
                    i = va_arg(a, int);
                    puth(i >> 12);
                    puth(i >> 8);
                    puth(i >> 4);
                    puth(i);
                    break;
                case 0: return;
                default: goto bad_fmt;
            }
        } else
bad_fmt:    uart_putc(c);
    }
    va_end(a);
}




Saturday, May 10, 2014

Links for Open Data

Governments Open Data

Other Resources

  • API for city route planning, public transport, stops and routes: http://www.eway.in.ua/ua/api

Articles


Wednesday, April 30, 2014

Links 03-04/2014

Python

Thoughts on Python 3 (in Russian) - http://habrahabr.ru/post/147281/

Python Idioms - http://safehammad.com/downloads/python-idioms-2014-01-16.pdf simple but

Python with Braces project - http://www.pythonb.org/ (weird =) )


Technical

Heartbleed explained as a comic (in Russian) - http://blogerator.ru/page/heartbleed-v-vide-komiksa-openssl-bag-oshibka

How To Get Experience Working With Large Datasets - http://www.bigfastblog.com/how-to-get-experience-working-with-large-datasets

The Apache Projects – The Justice League Of Scalability - http://www.bigfastblog.com/the-apache-projects-the-justice-league-of-scalability

Schrödinger's bug (schroedinbug) - http://catb.org/jargon/html/S/schroedinbug.html

An Icehouse Sneak Peek – OpenStack Networking (Neutron) - http://redhatstackblog.redhat.com/2014/04/16/an-icehouse-sneak-peek-openstack-networking-neutron/

Fiction

Garden State - http://www.ex.ua/120150

Open House Day (in Russian) - http://gerasim-st.narod.ru/text/vorota.html

Miscellaneous

New works by iCube (in Russian) - http://zyalt.livejournal.com/1042689.html

Construction of the world's largest solar thermal power plant completed (in Russian) - http://habrahabr.ru/post/212771/

Daniel Sieberg: how to stop treating your smartphone as part of your body (in Russian) - http://habrahabr.ru/company/yotadevices/blog/218537/

Are wearables just a fad? - http://pleasediscuss.com/andimann/20140403/are-wearables-just-a-fad/


Monday, April 28, 2014

UDF for Exponential moving average in Pig Latin


Today I ran into the fact that there's no native way to calculate a moving average in Pig.

For example:

A = {(5, 1), (2, 2), (7, 3), (4, 4)}
We need to calculate the EMA of the first field, weighted by the second field, with alpha=0.5.

ema(A) = (5*1 + 2*0.5 + 7*0.25 + 4*0.125) / (1 + 0.5 + 0.25 + 0.125) = 4.4
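The arithmetic above can be checked with a few lines of plain Python (this verifies only the worked example, not the UDF below):

```python
# EMA of [5, 2, 7, 4] with geometrically decaying weights, alpha = 0.5
values = [5, 2, 7, 4]
alpha = 0.5
weights = [alpha ** i for i in range(len(values))]  # [1, 0.5, 0.25, 0.125]
ema = sum(v * w for v, w in zip(values, weights)) / sum(weights)
print(ema)  # 4.4
```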

In Pig with a Python UDF it looks like this:

REGISTER 'python_udf.py' USING jython AS myfuncs;

B = GROUP A ALL;
C = FOREACH B {
    GENERATE A as src,
             myfuncs.EMA(A, 1, 3, 0.5) as ema;
}


DUMP C;

UDF:

@outputSchema("value:double")
def EMA(D, weight_field, wmax, alpha):
    """
    Calculates exponential moving average
    note: weights are reversed!
    """
    weights = [x for x in range(1, wmax+1)]
    weights_values = {}
    wv = 1.0
    for w in weights:
        weights_values[w] = wv
        wv *= alpha
    denom = sum(weights_values.values())
    numer = 0.0
    for weight in weights:
        numer += sum(1 for x in D if x[weight_field] == weight)*weights_values[weight]
    return numer/denom


Pretty straightforward, but it works. If you know a more elegant way, please share it!

Tuesday, April 22, 2014

Heroku and Brunch+Ember.JS

The goal is to build static files (with Brunch) while Heroku deploys the application, and at runtime serve them statically alongside the dynamic backend (Python/Django in my case).

To build with Brunch we need Node.js, but the backend needs the Python buildpack.
heroku-buildpack-multi makes it possible to have several buildpacks in one Heroku app simultaneously.

1. We need to attach heroku-buildpack-multi to our app:

 
$ heroku config:add BUILDPACK_URL=https://github.com/ddollar/heroku-buildpack-multi.git
 
2. Add a .buildpacks file to your repository

List the buildpacks one per line. The first should be Node.js, the second your backend buildpack:


$ cat .buildpacks
https://github.com/heroku/heroku-buildpack-nodejs.git#v58
https://github.com/heroku/heroku-buildpack-python.git#v36


3. Configure your node.js app with package.json
$ cat package.json
{
  "name": "dnd",
  "version": "0.0.1",
  "description": "D&D Character Generator",
  "author": "Roman Rader",
  "license": "BSD",
  "dependencies": {
    "brunch": ">= 1.7.12",
    "javascript-brunch": ">= 1.0 < 1.8",
    "css-brunch": ">= 1.0 < 1.8",
    "bower": ">= 1.2.0",
    "ember-precompiler-brunch": ">= 1.5.0",
    "less-brunch": "~1.7.1"
  },
  "scripts": {
    "postinstall": "./postinstall.sh",
  }
}


4. Here, postinstall.sh is a script that installs the Bower modules and runs the Brunch build
$ cat postinstall.sh
./node_modules/bower/bin/bower install
./node_modules/.bin/brunch build


5. Configure Bower dependencies with bower.json

$ cat bower.json
{
  "name": "dnd-chargen",
  "version": "0.0.1",
  "license": "BSD",
  "private": true,
  "dependencies": {
    "ember": "1.4.0",
    "ember-data": "1.0.0-beta.5",
    "handlebars": "~1.3.0",
    "bootstrap": "3.0.x",
    "moment": "~2.5.1"
  }
}


6. And the last thing: configure Brunch with config.coffee

It's a pretty standard config file, but there is an important Heroku-specific detail:

Node.js files live in the vendor/node directory on the Heroku side, so you have to exclude them from compilation:

javascripts->joinTo->
        'static/javascripts/vendor.js': /^bower_components|vendor\/(?!node)/

To specify the directory for compiled artifacts, use the public property:
public: 'public'

All compiled files will then be in the /public directory.



$ cat config.coffee
exports.config =

  files:
    javascripts:
      defaultExtension: 'js'
      joinTo:
        'static/javascripts/app.js': /^app/
        'static/javascripts/vendor.js': /^bower_components|vendor\/(?!node)/

    stylesheets:
      defaultExtension: 'css'
      joinTo: 'static/stylesheets/app.css': /^app/

    templates:
      precompile: true
      root: 'templates'
      defaultExtension: 'hbs'
      joinTo: 'static/javascripts/app.js' : /^app/
      paths:
        jquery: 'bower_components/jquery/jquery.js'
        handlebars: 'bower_components/handlebars/handlebars.js'
        ember: 'bower_components/ember/ember.js'

  modules:
    addSourceURLs: true

  paths:
    public: 'public'


Now you can try pushing to Heroku; everything should compile fine.
Of course, the app folder is empty for now (if it exists at all), but Brunch should still compile at least the Bootstrap assets.

Put your application code in the /app directory (and don't try to rename it :) that doesn't work, it's kind of hardcoded).

Friday, April 18, 2014

Control Hyper-V with Python

A collection of links for writing scripts to control Hyper-V from Python.

To manage Hyper-V machines there's the WMI API (in my case I'll use Python WMI; there's obviously a PowerShell version too).

When Windows 8 came out, two versions of the API appeared: root\virtualization and root\virtualization\v2.

I think code is the best documentation for an API (at least when I wrote my scripts, code examples were far more useful than anything else), so without further ado: read the code of my scripts.

Hyper-V WMI Provider Version 1

Namespace: root\virtualization
My script:
https://github.com/rrader/hue-build/blob/master/sandbox/hyperv.py

MSDN Documentation: http://msdn.microsoft.com/en-us/library/hh850319%28v=vs.85%29.aspx

Control Hyper-V VMs with Python - http://stackoverflow.com/questions/12970303/control-hyper-v-vms-with-python

Most useful source is Nova (OpenStack) driver for Hyper-V
https://github.com/openstack/nova/tree/master/nova/virt/hyperv
(all files without v2 suffix).

Hyper-V WMI Provider Version 2

Namespace: root\virtualization\v2
My script:
https://github.com/rrader/hue-build/blob/master/sandbox/hypervv2.py

Also, most useful source is Nova (OpenStack) driver for Hyper-V
https://github.com/openstack/nova/tree/master/nova/virt/hyperv
(all files with v2 suffix).

Network operations:
https://github.com/petrutlucian94/nova_dev/blob/master/nova/virt/hyperv/networkutilsv2.py

Attaching a VHD To A VM Using The Hyper-V WMI V2 Namespace - http://blogs.msdn.com/b/taylorb/archive/2013/08/12/attaching-a-vhd-to-a-vm-using-the-hyper-v-wmi-v2-namespace.aspx

Monday, February 24, 2014

Reinventing the wheel: "ORM" with PropertiesConfiguration

"ORM" with PropertiesConfiguration

It was interesting to try creating a simple database based on a 'properties' file (I needed to store different objects/tables in a single key-value storage; it was temporary and quite reasonable for that situation).

Dependencies
1) The useful gson library to convert beans to/from JSON (serialization)
https://code.google.com/p/google-gson/

2) The PropertiesConfiguration class, which is just like Properties but supports auto-saving and reloading from file (persistence)
http://commons.apache.org/proper/commons-configuration/apidocs/org/apache/commons/configuration/PropertiesConfiguration.html


Building ORM:

1. Let's create our key-value storage:


public class PersistentConfiguration extends PropertiesConfiguration {
    public PersistentConfiguration(String fileName) throws ConfigurationException {
        super();

        File config = new File(fileName);
        setFile(config);
        this.setAutoSave(true);
        this.setReloadingStrategy(new FileChangedReloadingStrategy());
        this.setDelimiterParsingDisabled(true);
        this.setListDelimiter((char) 0);

        if (config.exists()) {
            this.load();
        }
    }
}


It's a file-based storage. Since we need to store JSON, we have to keep it from treating commas as list separators.

2.

To store objects, every object needs an ID, and we store the last created index to avoid collisions after some elements are deleted.

Let's use the key "<class name>:index" for the last index, and
"<class name>:<id>" for table rows (see getIndexPropertyName and getItemPropertyName in Storage).

Our beans should implement this interface, so that we can always access the ID of a row:
public interface Indexed {
    String getId();
    void setId(String id);
}


And finally, the storage:

public class Storage {
    protected final Gson gson = new Gson();
    private PersistentConfiguration config = null;

    public Storage() {
        try {
            config = new PersistentConfiguration("./data.properties");
        } catch (ConfigurationException e) {
            e.printStackTrace();
        }
    }

    public synchronized void store(Indexed obj) {
        String modelIndexingPropName = getIndexPropertyName(obj.getClass());

        if (obj.getId() == null) {
            int lastIndex = config.getInt(modelIndexingPropName, 0);
            lastIndex ++;
            config.setProperty(modelIndexingPropName, lastIndex);
            obj.setId(Integer.toString(lastIndex));
        }

        String modelPropName = getItemPropertyName(obj.getClass(), Integer.parseInt(obj.getId()));
        String json = gson.toJson(obj);
        config.setProperty(modelPropName, json);
    }

    public synchronized Indexed load(Class model, int id) throws ItemNotFound {
        String modelPropName = getItemPropertyName(model, id);
        if (config.containsKey(modelPropName)) {
            String json = config.getString(modelPropName);
            return (Indexed) gson.fromJson(json, model);
        } else {
            throw new ItemNotFound();
        }
    }

    public synchronized void delete(Class model, int id) {
        String modelPropName = getItemPropertyName(model, id);
        config.clearProperty(modelPropName);
    }

    public boolean exists(Class model, int id) {
        return config.containsKey(getItemPropertyName(model, id));
    }


    private String getIndexPropertyName(Class model) {
        return String.format("%s:index", model.getName());
    }

    private String getItemPropertyName(Class model, int id) {
        return String.format("%s.%d", model.getName(), id);
    }
}


Done!

Improvements:
a SELECT with filtering:

    public synchronized List loadAll(Class model, FilteringStrategy filter) {
        ArrayList<Indexed> list = new ArrayList<Indexed>();
        String modelIndexingPropName = getIndexPropertyName(model);
        LOG.info(String.format("Loading all %s-s", model.getName()));
        int lastIndex = getConfig().getInt(modelIndexingPropName, 0);
        for(int i=1; i<=lastIndex; i++) {
            try {
                Indexed item = load(model, i);
                if ((filter == null) || filter.is_conform(item)) {
                    list.add(item);
                }
            } catch (ItemNotFound ignored) {
            }
        }
        return list;
    }


Where FilteringStrategy is:

public interface FilteringStrategy {
    boolean is_conform(Indexed item);
}


For example:
public class OnlyOwnersFilteringStrategy implements FilteringStrategy {
    private final String username;

    public OnlyOwnersFilteringStrategy(String username) {
        this.username = username;
    }

    @Override
    public boolean is_conform(Indexed item) {
        Owned object = (Owned) item;
        return object.getOwner().compareTo(username) == 0;
    }
}

Monday, February 10, 2014

Neat way to get descriptor object

(c) zaper3095

An interesting question came up at work today.
Quoting from StackOverflow:

In Python 3
class A(object):
    attr = SomeDescriptor()
    ...
    def somewhere(self):
        # need to check whether self.attr is a SomeDescriptor
        desc = self.__class__.__dict__[attr_name]
        return isinstance(desc, SomeDescriptor)
Is there a better way to do it? I don't like this self.__class__.__dict__ stuff.

In short, the answer is NO. There is no way to get the descriptor object itself (i.e. without __get__ being invoked) other than fetching it from __dict__.

However, there are several workarounds ;)

1) Return self from the __get__ method when instance is None (this happens when the attribute is looked up on the class object, e.g. via getattr(type(self), attr_name)).
Like this:

class SomeDescriptor():
    def __get__(self, inst, instcls):
        if inst is None:
            # instance attribute accessed on class, return self
            return self
        return 4

class A():
    attr = SomeDescriptor()
    def somewhere(self):
        attr_name = 'attr'
        desc  = getattr(type(self), attr_name)
        # desc = self.__class__.__dict__[attr_name]  # b.somewhere() would raise KeyError
        return isinstance(desc, SomeDescriptor)
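As the commented-out line above hints, the self.__class__.__dict__ lookup breaks for subclasses, while getattr on the class walks the MRO. A small self-contained sketch (the class names mirror the snippet above):

```python
class SomeDescriptor:
    def __get__(self, inst, instcls):
        if inst is None:
            # attribute accessed on the class itself: return the descriptor
            return self
        return 4

class A:
    attr = SomeDescriptor()

class B(A):
    # 'attr' lives in A.__dict__, not in B.__dict__
    pass

b = B()
print(b.attr)                                                # 4
print(isinstance(getattr(type(b), 'attr'), SomeDescriptor))  # True
print('attr' in type(b).__dict__)                            # False: the __dict__ lookup would raise KeyError
```

So looking the attribute up on type(self) keeps the check working down the inheritance chain.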
2) The second solution came from my colleague Maksym Panibratenko.
Since the descriptor's __get__ returns a function, we can set an attribute on that function and then check at runtime whether the function has that attribute, using hasattr().

    class SomeDescriptor():
        def __get__(self, inst, instcls):
            def func():
                pass
            func.implemented = True
            return func

    class A():
        attr = SomeDescriptor()
        def somewhere(self):
            attr_name = 'attr'
            desc  = getattr(self, attr_name)
            return hasattr(desc, 'implemented')

http://stackoverflow.com/questions/21629397/neat-way-to-get-descriptor-object
Follow the link to see the full discussion.

Tuesday, January 14, 2014

Metaclasses in Python (talk on SF Python)

An interesting talk by Jess Hamrick that helps put knowledge about metaclasses in order.




During the Q&A there was one interesting note about how methods are stored in Python.
When accessed through an instance, every function defined in a class becomes a bound method.
A method is an object (everything is an object in Python :) ) that is callable and has some useful attributes (not all of them listed here):
  • im_func is the underlying function object (the original function)
  • im_self is the class instance the method is bound to
  • im_class is the class of im_self
More details can be found in the docs: http://docs.python.org/2/reference/datamodel.html.
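The im_* names are the Python 2 spelling; in Python 3 a bound method carries the same information as __func__ and __self__ (im_class was dropped, since it is just type(method.__self__)). A quick Python 3 check (the class name here is my own illustration):

```python
class Greeter:
    def hello(self):
        return "hello from " + type(self).__name__

g = Greeter()
bound = g.hello  # attribute access on an instance produces a bound method

print(bound.__func__ is Greeter.hello)  # the original function (im_func)
print(bound.__self__ is g)              # the instance it is bound to (im_self)
print(type(bound.__self__) is Greeter)  # the class (Python 2's im_class)
print(bound())                          # calls hello(g)
```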

Monday, January 6, 2014

Enabling Neutron in Devstack 2.0

Setting up devstack on a vagrant machine with the Docker hypervisor and Neutron enabled. In this article Neutron is configured with the Linux Bridge plugin.

My previous article about enabling Neutron in devstack didn't take into account that I need to set up a VirtualBox machine and that the hypervisor will be Docker. Nova's Docker plugin doesn't work with Open vSwitch, so I had to fall back to Linux Bridge.

So, let's configure our machine:

Vagrantfile

Vagrantfile should have these lines:
 config.vm.network :private_network, ip:"172.16.0.201", :netmask => "255.255.0.0"
 config.vm.network :private_network, ip:"10.10.0.201", :netmask => "255.255.0.0"

This configures two host-only interfaces: one for the internal (provider) network, and a second for the external network (floating IPs).

localrc

should contain these lines:

# Use Docker hypervisor for Nova
VIRT_DRIVER=docker

# IP of vagrant box (and Horizon)
HOST_IP=172.16.0.201

# Networks
VLAN_INTERFACE=eth1
FLAT_INTERFACE=eth1
GUEST_INTERFACE=eth1
PUBLIC_INTERFACE=eth2
FIXED_RANGE=172.16.1.0/24
NETWORK_GATEWAY=172.16.1.1
FIXED_NETWORK_SIZE=256
FLOATING_RANGE=10.10.1.0/24

# Enable Neutron
enable_service q-svc q-agt q-dhcp q-l3 q-meta q-lbaas neutron

# Disable Cinder service
disable_service c-api c-sch c-vol

# Disable security groups
Q_USE_SECGROUP=False
LIBVIRT_FIREWALL_DRIVER=nova.virt.firewall.NoopFirewallDriver

# neutron linuxbridge
Q_PLUGIN=linuxbridge
Q_AGENT=linuxbridge

Bridge driver

We don't need to replace bridge with brcompat in this case (we don't need OVS at all), so skip that step and leave everything as is (just make sure localrc has Q_PLUGIN=linuxbridge).

VirtualBox configuration

You may need to configure the VirtualBox host-only interfaces before spinning up vagrant (you can skip this step, since the interfaces will be created automatically); you can use this script:

#!/bin/bash

# Private Network  vboxnet0 (172.16.0.0/16)
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet0 --ip 172.16.0.254 --netmask 255.255.0.0

# Public Network vboxnet1 (10.10.0.0/16)
VBoxManage hostonlyif create
VBoxManage hostonlyif ipconfig vboxnet1 --ip 10.10.0.1 --netmask 255.255.0.0

Final steps

Build devstack as usual (stack$ ./stack.sh).
To check that everything is set up correctly, open the Horizon interface (http://172.16.0.201 if you followed this manual) and verify that your router (under the demo tenant) has two interfaces: the private network and the gateway.

Finally, push some images to the registry and spin up instances.

Note that you will not be able to ping or otherwise reach instances from the global namespace. Use
# ip netns
to list all namespaces, and run commands inside one with
# ip netns exec <namespace id> <command>
Usually all instances can be reached from the router namespace (the qrouter-xxxxx namespace). For example, I have an instance up at 10.10.1.2 with a web server on port 8000.
To access it I can do:
# ip netns
qlbaas-1cd37d1d-a5c8-4dcc-8c78-4edb550e5159
...
7a078076c5c7dde649f53291ae7d7a9e698a262fe3225153c737b33725af40a1
...
qrouter-0588fbc8-da2e-46b0-a093-0258a702a168

# ip netns exec qrouter-0588fbc8-da2e-46b0-a093-0258a702a168 wget 10.10.1.2

Now everything seems to work, though I still have no access to the instances from the external network.