
Wednesday, June 17, 2009

Resizing images

Pictures accumulate. The treks, trips, parties and the infinite fun that we've had in the past year of college have resulted in me filling up half my hard disk with photos. My dear friends have fancy cameras which produce 8 megapixel images, each occupying almost 2.5MB of my precious hard disk space.

Since an 8 megapixel image is serious overkill, I decided to resize them all to 1600x1200 (2 megapixels, around 1MB each). Now that I've turned all linuxy, I was looking for a 'Microsoft Office Picture Manager'-like application for Linux when Sriram Kashyap suggested that it'd be way easier to write a shell script to do the job.

So friends, here it is :)
#!/bin/bash
# Resize every .JPG in the given folder (or the current one) to fit in 1600x1200.
# Quoting the variables keeps filenames with spaces from breaking the loop.
dir="${1:-.}"
for pic in "$dir"/*.JPG
do
   echo "Converting $pic ..."
   convert "$pic" -resize 1600x1200 "$pic"
done
Pass the folder containing your pics as an argument to the script, or just put the script in your folder and double-click :)
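A side note on what -resize 1600x1200 actually does: ImageMagick fits the image inside the 1600x1200 box while preserving the aspect ratio, so portrait shots don't get squashed to exactly those dimensions. Here's a tiny Python sketch of that geometry arithmetic (the function name fit_within is mine, just for illustration; it's not part of ImageMagick):

```python
def fit_within(w, h, box_w=1600, box_h=1200):
    """Scale (w, h) to fit inside box_w x box_h, keeping the aspect ratio.

    This mirrors ImageMagick's default -resize WxH geometry behaviour.
    """
    scale = min(box_w / w, box_h / h)
    return (round(w * scale), round(h * scale))

print(fit_within(3264, 2448))  # a landscape 8 megapixel shot
print(fit_within(2448, 3264))  # the same camera held in portrait
```

The landscape 8 megapixel frame (3264x2448 is 4:3) lands exactly on 1600x1200, while the portrait one becomes 900x1200.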

Sunday, May 3, 2009

Train, food and home

It was 6.30 in the evening when I was at the new Canara bank ATM in IIT Bombay, looking at a receipt which said "Unable to dispense cash. Transaction failed". I had exactly Rs. 260 in my wallet, and a 24hr journey ahead.

After a short auto ride to Kanjurmarg, and 20 minutes of waiting in the queue, I got a ticket to Dadar. The local train ensured that I was at Dadar in the next 20 minutes. At the entrance of platform 8, my luggage was inspected, and the policeman on duty spent almost 3 minutes playing with my Nokia 6300. He seemed to be particularly interested in the albums. I still wonder what he was looking for!

Once on platform 8, I bought a packet of chips (10 bucks, and barely 10 chips in it) and boarded the train. There was an hour before the train departed, and I busied myself with the chips and Jeffrey Archer.

As the train neared Kalyan, I bought a veg pulao for dinner (Rs. 40!), and I felt thoroughly cheated. The fact that I now had only Rs. 183 left didn't help the cause.

After 9 hours of uninterrupted sleep (even the TTE didn't turn up), I woke up near Belgaum, the land of VTU. I had 2 idlis and 2 vadas for breakfast (Rs. 20), and it was a pleasant break from the usual pav bhaji and vada pav of Mumbai. Even the chutney had just the right amount of spice.

By lunch time, I was in Dharwad. I must say, the menu on trains has improved considerably. I had 2 ಜೋಳದ ರೊಟ್ಟಿs (jolada rottis, sorghum flatbreads), curry, rice, dal and buttermilk for lunch (Rs. 35 only). I started feeling more and more South Indian :)

After 200 pages of Jeffrey Archer, the train neared Tumkur. The cool air was refreshing. For the first time in months, my shirt was not sticking to my body - I was dry! What a feeling!

Dad was waiting for me at the railway station. I wonder why home always feels sweet.

Monday, April 6, 2009

April fooled!

Last week, Prof. Om Damani announced our next assignment. We had to implement statistical machine translation, that too with exams just 15 days away. Given that he took almost 4 classes (that's about 6 hours) just to give an overview of the procedure, we simply had no clue how we would manage it.

Well, we got this email from him today.


Was it April 1st when I announced it :)
Sorry to disappoint you, but no assignment 6.
- Om

I've never ever felt this good after being fooled!

Thursday, March 19, 2009

munnabhai@cse.iitb.ac.in

Life at IITB surely rocks, and so does our department :)



Starring Adil as Munna Bhai, and Sree Shankar as Circuit.

Monday, March 9, 2009

Viterbi algorithm for second order Hidden Markov model

This post is a supplement to the Viterbi algorithm article on Wikipedia. I'm posting this because I couldn't find an understandable implementation of Viterbi for a second order HMM anywhere (and I badly needed it for my assignment). So, Anup, Saurabh and I put our heads together and modified the Wiki article's Viterbi code to work for a 2nd order HMM.

The story goes thus (from the Wiki article): Alice and Bob are two friends who live far apart from each other and talk together daily over the telephone about what they did that day. Bob is only interested in three activities: walking in the park, shopping, and cleaning his apartment. The choice of what to do is determined exclusively by the weather on a given day. Alice has no definite information about the weather where Bob lives, but she knows general trends. Based on what Bob tells her he did each day, Alice tries to guess what the weather must have been like.

Alice believes that the weather operates as a discrete Markov chain. There are two states, "Rainy" and "Sunny", but she cannot observe them directly, that is, they are hidden from her. On each day, there is a certain chance that Bob will perform one of the following activities, depending on the weather: "walk", "shop", or "clean". Since Bob tells Alice about his activities, those are the observations. The entire system is that of a hidden Markov model (HMM).

Alice knows the general weather trends in the area, and what Bob likes to do on average. Since our model is second order, start_probability reflects Alice's beliefs about the weather on the first two days: for example, given that it is rainy on one day, there is a probability of 0.7 that it will rain the next day as well.
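One detail worth spelling out: all the pair-keyed tables use 'previous state|current state' as the key, and each conditioning state's entries form a probability distribution, so they must sum to one. A tiny self-contained check (the start table is repeated here so this snippet runs on its own):

```python
# repeated from the listing below, so this snippet runs on its own
start_probability = {
    'Rainy|Rainy': 0.7,
    'Rainy|Sunny': 0.3,
    'Sunny|Rainy': 0.4,
    'Sunny|Sunny': 0.6,
}

# group the entries by the conditioning (previous-day) state and
# check that each conditional distribution sums to 1
row_sums = {}
for key, p in start_probability.items():
    prv, curr = key.split('|')
    row_sums[prv] = row_sums.get(prv, 0.0) + p

print(row_sums)
for s in row_sums.values():
    assert abs(s - 1.0) < 1e-9
```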

Below is the Python code (with some helpful print statements added). Feel free to copy :)


states = ('Rainy', 'Sunny')

observations = ('walk', 'shop', 'clean')

# all pair-keyed tables below use 'previous state|current state' as the key
start_probability = {
    'Rainy|Rainy' : 0.7,
    'Rainy|Sunny' : 0.3,
    'Sunny|Rainy' : 0.4,
    'Sunny|Sunny' : 0.6
}

transition_probability = {
    'Rainy|Rainy' : {'Rainy' : 0.8, 'Sunny' : 0.2},
    'Rainy|Sunny' : {'Rainy' : 0.5, 'Sunny' : 0.5},
    'Sunny|Rainy' : {'Rainy' : 0.6, 'Sunny' : 0.4},
    'Sunny|Sunny' : {'Rainy' : 0.3, 'Sunny' : 0.7},
}

emission_probability = {
    'Rainy' : {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
    'Sunny' : {'walk': 0.6, 'shop': 0.3, 'clean': 0.1},
}

def forward_viterbi(obs, states, start_p, trans_p, emit_p):
    # T maps a state pair "prv|curr" to (total prob., Viterbi path, Viterbi prob.)
    T = {}
    for state1 in states:
        for state2 in states:
            T[state1 + "|" + state2] = (start_p[state1 + "|" + state2],
                                        [state2],
                                        start_p[state1 + "|" + state2])
    for output in obs:
        U = {}
        print("--------------------\nObservation:", output)
        for next_state in states:
            print("Next state:", next_state)
            for curr_state in states:
                # sum (total) and maximise (valmax) over the previous state
                total = 0
                argmax = None
                valmax = 0
                for prv_state in states:
                    print("\tprv_state|curr_state:", prv_state + "|" + curr_state)
                    (prob, v_path, v_prob) = T[prv_state + "|" + curr_state]
                    p = emit_p[curr_state][output] * trans_p[prv_state + "|" + curr_state][next_state]
                    prob *= p
                    v_prob *= p
                    total += prob
                    if v_prob > valmax:
                        argmax = v_path + [next_state]
                        valmax = v_prob
                    print("\t\t", v_path, v_prob)
                U[curr_state + "|" + next_state] = (total, argmax, valmax)
                print("\targmax:", argmax, "valmax:", valmax)
        T = U
    ## apply sum/max to the final states:
    total = 0
    argmax = None
    valmax = 0
    for state1 in states:
        for state2 in states:
            (prob, v_path, v_prob) = T[state1 + "|" + state2]
            total += prob
            if v_prob > valmax:
                argmax = v_path
                valmax = v_prob
    return (total, argmax, valmax)

def example():
    return forward_viterbi(observations,
                           states,
                           start_probability,
                           transition_probability,
                           emission_probability)

res = example()
print("\nResult:", res)
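With only two states and three observations, we can also sanity-check the dynamic program by enumerating every state sequence outright. The sketch below does that (brute_force is a name I'm introducing here, not part of the assignment code; the tables are repeated so the snippet runs on its own). Note that the enumeration mirrors two quirks carried over from the Wiki code: the pair-keyed start table bakes in one extra "pre-history" state, and the path carries one dangling final state that emits nothing.

```python
import itertools

# model tables, repeated so this snippet runs on its own
states = ('Rainy', 'Sunny')
observations = ('walk', 'shop', 'clean')
start_probability = {
    'Rainy|Rainy': 0.7, 'Rainy|Sunny': 0.3,
    'Sunny|Rainy': 0.4, 'Sunny|Sunny': 0.6,
}
transition_probability = {
    'Rainy|Rainy': {'Rainy': 0.8, 'Sunny': 0.2},
    'Rainy|Sunny': {'Rainy': 0.5, 'Sunny': 0.5},
    'Sunny|Rainy': {'Rainy': 0.6, 'Sunny': 0.4},
    'Sunny|Sunny': {'Rainy': 0.3, 'Sunny': 0.7},
}
emission_probability = {
    'Rainy': {'walk': 0.1, 'shop': 0.4, 'clean': 0.5},
    'Sunny': {'walk': 0.6, 'shop': 0.3, 'clean': 0.1},
}

def brute_force(obs, states, start_p, trans_p, emit_p):
    """Enumerate every state sequence and accumulate the same three
    quantities the dynamic program computes:
    (total prob., most likely path, prob. of that path)."""
    total, best_prob, best_path = 0.0, 0.0, None
    # seq[0] is the extra "pre-history" state hidden inside the pair-keyed
    # start table; seq[-1] is the dangling final state that emits nothing
    for seq in itertools.product(states, repeat=len(obs) + 2):
        p = start_p[seq[0] + '|' + seq[1]]
        for t, output in enumerate(obs):
            prv, curr, nxt = seq[t], seq[t + 1], seq[t + 2]
            p *= emit_p[curr][output] * trans_p[prv + '|' + curr][nxt]
        total += p
        if p > best_prob:
            best_prob, best_path = p, list(seq[1:])
    return total, best_path, best_prob

print(brute_force(observations, states, start_probability,
                  transition_probability, emission_probability))
```

For a single observation ('walk',) the total is easy to check by hand: the final transition sums out to 1, so it collapses to (0.7 + 0.4)*0.1 + (0.3 + 0.6)*0.6 = 0.65.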