## Wednesday, January 10, 2018

### Capturing ADS-B packets using HackRF

A few weeks ago, I managed to set up my computer to capture ADS-B beacons transmitted from aircraft flying over UCD. That was a fantastic experience. Unfortunately, I was working on another tool at the same time, and therefore I cannot remember the exact tools I installed for this particular task. So, I will write down the steps as I remember them, which may include some steps that are not strictly required for this work. I did this work on a Kali Linux machine.

(1) Install some required packages using the apt-get command as follows.

sudo apt-get install gqrx gr-air-modes cmake g++ libpython-dev python-numpy swig hackrf libhackrf-dev

(2) Install the tool called SoapySDR, which is available on GitHub.

git clone https://github.com/pothosware/SoapySDR.git
cd SoapySDR
mkdir build
cd build
cmake ..
make -j4
sudo make install
sudo ldconfig # needed on Debian systems
SoapySDRUtil --info

(3) Install the tool called SoapyHackRF, which is also available on GitHub.

git clone https://github.com/pothosware/SoapyHackRF.git
cd SoapyHackRF
mkdir build
cd build
cmake ..
make
sudo make install
SoapySDRUtil --probe="driver=hackrf"

(4) Alongside the two tools installed in the previous steps, we need some extra packages which we can get using the apt-get command.

sudo apt-get install soapysdr-module-uhd libuhd003.010.002 libuhd-dev

(5) Now, we need to add some udev rules, which are explained in the following link.

(6) We are good to go now. Let's connect the HackRF to the computer and run the following command to start the ADS-B receiver.

modes_rx -s osmocom -r 10e6

It can take some time to pick up signals from an aircraft, as they are not very frequent around our building. Whenever an ADS-B transmission is picked up by our setup, it is displayed on the terminal.
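To get a feel for what modes_rx is printing, here is a minimal Python sketch (not part of the original setup) that decodes the downlink format and ICAO aircraft address from a raw Mode S extended squitter frame. The sample frame below is a widely circulated example, not one captured by my setup.

```python
def decode_adsb_header(hex_frame):
    """Decode the downlink format (DF) and the 24-bit ICAO aircraft
    address from a 112-bit Mode S frame given as a hex string."""
    first_byte = int(hex_frame[0:2], 16)
    df = first_byte >> 3           # top 5 bits hold the downlink format
    icao = hex_frame[2:8].upper()  # next 3 bytes hold the ICAO address
    return df, icao

# A widely used sample ADS-B frame (DF 17 = extended squitter):
df, icao = decode_adsb_header("8D4840D6202CC371C32CE0576098")
print(df, icao)  # 17 4840D6
```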

~*****************~

## Wednesday, December 27, 2017

### Biography of John Nash

Among the various interesting places in UCD, I find the James Joyce Library a special place, as I always admire good books. During my second visit to the library, I accidentally came across an interesting book which grabbed my attention: the biography of the famous mathematician John Forbes Nash, written by Sylvia Nasar. The book is called "A Beautiful Mind", a name made famous by the movie of the same name, which is based on this very book. I had already watched the movie a few years ago and immediately felt that I should read the book too.

John Nash was an American mathematician famous for his contributions to economics, including the Nash equilibrium. He won the 1994 Nobel Prize in Economics. He suffered from schizophrenia, a mental illness which caused him to experience delusions and hear voices that did not exist. His illness left him unable to distinguish reality from his own imagination. His life story is painful, as he bounced between reality and his delusions, affecting his academic career and family life. Luckily, he had a wonderful group of people around him who never gave up on him till the end.

I learned so many things from the book about the life of this mathematician which were not represented in the movie. Of course, it is natural that when a movie is based on a book, many details from the book have to be omitted in order to fit the story into a few hours of film. However, missing these details from John Nash's life story makes it so incomplete. Therefore, I'm glad that I found the book.

Among so many things, I thought it worth highlighting some facts about this interesting man which I couldn't find in the movie. Let's start with some of the dark aspects.

(1) The movie tells us how Nash found Alicia, how they fell in love and built their family. The missing piece is that Nash had a love life before meeting Alicia. For a while, Nash lived with a girl called Eleanor and they had a son together. After their son was born, Nash refused to marry Eleanor and even refused to pay child support when she tried to take legal action. Nash's mother tried hard to prevent her son from making this terrible mistake, but she failed. Nash later met a student called Alicia and married her. Eleanor even went to meet Alicia to prevent this marriage, but still, Alicia didn't care.

(2) While Nash was working for a defense research institution, he was once arrested by police on charges of "indecent exposure" in a men's bathroom. Due to this incident, his security clearance was revoked, making him unable to work on defense projects. It is still not clear whether Nash was gay. At some points while he was a Ph.D. student, his behavior suggested that he had some kind of affection for male students. However, it is still difficult to confirm.

(3) During the Korean War, the US Government drafted young American men to go to war. Nash was on the compulsory military draft list and it was clear that he would have to join the military very soon. He used his personal and family connections to remove his name from the list while some of his less fortunate Princeton colleagues had to go to war. Just imagine how many brilliant young men the war must have taken away. The move spared Nash's life, even though the way he did it was not right.

(4) Nash was determined to win a mathematical prize somehow, and for that, he was supposed to publish in an American journal. He first submitted his paper to Acta Mathematica, a prestigious Swedish mathematical journal, and right after getting the acceptance with comments, he immediately withdrew the paper and submitted it to the American Journal of Mathematics. The Swedish reviewers were outraged by Nash's unprofessional behavior.

Having said so much about the negative aspects of this brilliant man, I still find amazing things in him. Among many others, this is the most fascinating thing about John Nash. Even though he was suffering from schizophrenia, which caused him to see delusions and hear voices, he believed in something that even sane people fail to recognize as a wonderful idea.

Nash believed that an alien invasion of Earth was coming. He thought that he could figure out how the aliens were going to do it by decoding secret messages. His idea was that the whole world should unite and fight back. He tried to convince his fellows, high-profile members of the government and many others about this invasion, but nobody believed him. Being unable to convince others, he finally left the US and traveled to Switzerland. There he visited the US embassy and attempted to rip up his passport in front of the officials. He said that he was no longer an American citizen but a World Citizen. Although this idea was never accepted and he was deported back to the US, I find it an awesome idea.

Do we have to be insane to think that these divisions among human race are not going to help us in any way?

~**********~

## Wednesday, November 8, 2017

### Diving into FAT file system with a Hex Editor

In this post, I'm going to explore a disk image containing a FAT32 partition, using nothing but a hex editor. This exploration provides an important insight into how the FAT file system works. The disk image I'm using for this is a 100MB file which can be downloaded from here. The SHA1 hash value of the file is as follows.

d665dd4454f9b7fc91852d1ac8eb01a8f42ed6be

(1) First of all, we open this disk image using Okteta. Here's what the first 512 bytes, which form the first sector of the disk, look like. That means this is the Master Boot Record (MBR). We can see the distinguishable features of the MBR here. The very first thing to spot is that the last two bytes, located at offset 0x01FE of this sector, contain the value 0x55AA.

 First sector of the image which is the MBR
(2) In order to find the files stored in this disk image, we need to first locate the FAT partition. To do that, we have to read the partition table inside the MBR properly. By looking at the structure of the MBR, we can see that the partition table occupies a 64-byte area at the end of the sector, right before the two-byte signature. To make our lives easier, let's copy the partition table and paste it into a new tab in Okteta.

 Partition table which is 64 bytes long
(3) Once again, a glance at the structure of a partition table entry tells us that an entry in the table is 16 bytes long and the whole table is 64 bytes long. That means it can accommodate up to 4 partition entries. Empty partition entries are filled with zeros. Now, when we look at the partition table that we just extracted from our image, we can see that there's only one partition entry there. Following is that entry.

00202100 0BBE320C 00080000 00180300

(4) Let's start interpreting this partition entry. The very first byte in this entry tells us whether this partition is bootable or not, i.e., whether it contains an operating system. The first byte contains 0x00, which means this is not a bootable partition.

(5) Another useful piece of information is the type of file system on this partition. That information is available at offset 0x04 of the partition entry. The value there is 0x0B, as you can see. Our reference document tells us that this value means our partition is a FAT32 partition. Great! We know about the FAT file system, so we will be able to explore this partition.

(6) The next question that arises is: where is this FAT32 partition in the disk image? How do we locate it? Again, our reference document tells us that offset 0x08 of the partition entry contains 4 bytes which specify the starting sector of this partition. When you look at the partition entry, you can see that this value is 00 08 00 00. We need to interpret this number carefully. It is stored in little-endian format, which means the last byte is the most significant byte. We have to reverse the order of these bytes before interpreting.

The number you get once you reverse it from little-endian format is 00 00 08 00. Let's use the calculator to convert this hexadecimal number to decimal. The decimal value is 2048. That means this FAT32 partition begins at sector number 2048. Let's go there and see this partition.

But, wait! Our hex editor shows offsets in bytes, not in sector numbers. So, we need to find the byte offset of this sector number 2048. We can easily do that by multiplying 2048 by 512 because there are 512 bytes in a sector.

2048 x 512 = 1048576

Again, we have to convert this decimal byte offset 1048576 into hexadecimal before going there. Use the calculator for that too.

Now, 0x100000 is the byte offset in hexadecimal where you can find the FAT32 partition.
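The interpretation above can be sketched in a few lines of Python; the field offsets follow the standard MBR partition entry layout, and the 512-byte sector size is assumed as in the text:

```python
import struct

# The 16-byte partition entry found in step (3), as raw bytes.
entry = bytes.fromhex("002021000BBE320C0008000000180300")

bootable = entry[0]         # 0x80 = bootable, 0x00 = not bootable
part_type = entry[4]        # 0x0B = FAT32
start_lba, = struct.unpack_from("<I", entry, 8)    # little-endian u32
num_sectors, = struct.unpack_from("<I", entry, 12)

byte_offset = start_lba * 512   # 512 bytes per sector
print(start_lba, hex(byte_offset))  # 2048 0x100000
```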

(7) Instead of scrolling down to find the above offset, there is an easier way in Okteta. You can go to the Edit menu and select the Go to Offset... option. Then, at the bottom of the Okteta window, you get a text field where you can specify where you want to go. Put 100000 there and hit Enter.

 First sector of the FAT partition which is the boot sector.
Now, we are at the FAT32 partition. It's time to take the reference document which has FAT32 related data structures to your hand. We need it from here onwards.

(8) The very first sector of the FAT32 partition is called the boot sector. There is a lot of useful information in this sector. Let's go through some of the important pieces. It is a good idea to copy the first 48 bytes into a new tab in Okteta for our convenience. The information we are looking for is in this area.

 First 48 bytes of the boot sector in a new Okteta tab.
(9) In this boot sector, offset 0x0B contains 2 bytes which specify the number of bytes per sector. The value you can find in this location is 00 02 and, as usual, this is in little-endian. Convert it back to big-endian and you get 02 00. Converting this hexadecimal number to decimal gives us 512. That means the 512-bytes-per-sector rule clearly applies within this FAT32 partition.

(10) In this boot sector, offset 0x0D contains a byte which specifies the number of sectors per cluster. In our boot sector, this location contains the value 0x01. Converting from hexadecimal to decimal gives us 1. That means each cluster in this partition is actually a single sector. Simple enough.

(11) In this boot sector, offset 0x0E contains 2 bytes which specify the number of reserved sectors in this FAT32 partition. That means, the number of sectors between the beginning of the partition and the FAT1 table. The value in that offset gives us 20 00 which is in little-endian. In big-endian, we get the hexadecimal value 0x0020 which is 32 in decimal. That means, there are 32 sectors in the reserved area before the FAT1 table. It is important to note that these 32 sectors include the boot sector itself. In other words, boot sector is just another sector in the reserved area.

(12) In this boot sector, offset 0x10 contains a byte which specifies the number of FAT tables in this partition. Usually there are 2 tables, called FAT1 and FAT2, but it's better to check whether that's true. The value at that offset is 02 in hexadecimal, which is 2 in decimal. That means we indeed have two FAT tables.

(13) In this boot sector, offset 0x11 contains 2 bytes which specify the maximum number of file entries available in the root directory. However, that applies only to the FAT12 and FAT16 versions of FAT. The partition we are exploring is a FAT32 partition, and in FAT32, the number of entries in the root directory is not specified here. So, we don't have to interpret anything at this location.

(14) The offset 0x16 in the boot sector has 2 bytes which specify the number of sectors in each FAT table. There's an important thing to note here. If these two bytes contain some non-zero value, we can take that number. However, if the location contains all zeros in those two bytes, that means, the space is not enough to specify the information. In that case, we have to go to the offset 0x24 and interpret 4 bytes there.

The two bytes at offset 0x16 give us 00 00 in our image. That means we have to go to offset 0x24 and take the 4 bytes there. In our image, we have 18 06 00 00. Converting from little-endian gives us the value 00 00 06 18, which is 1560 in decimal. Therefore, we conclude that there are 1560 sectors in a FAT table. Since we have two FAT tables, they take up twice that space.

(15) In this boot sector, offset 0x2C contains 4 bytes which specify the first cluster of the root directory. This information is useful later when we recover file data. The value in that location is 0x02, which is 2 in decimal. Cluster numbering of the data area starts at 2 (cluster numbers 0 and 1 are reserved), so the root directory is the very first cluster of the data area. You will see how this information becomes useful later.

(16) Now, we have enough information to locate the root directory of this FAT32 partition. Here's how we calculate the location. The root directory is located right after the two FAT tables. That means, we just have to walk through the reserved area from the beginning of the partition, then through the FAT1 and FAT2 tables and there we find the root directory.

Offset to root directory = (number of sectors in reserved area) + (number of sectors in a FAT table) x 2
= 32 + (1560)x2
= 3152 sectors.
= 3152 x 512 bytes
= 1613824 bytes
= 0x18A000 bytes (in hex)

There's a tricky thing here. This offset specifies the location from the beginning of the partition. Unfortunately, we are dealing with an entire disk image. The FAT32 partition starts at 0x100000 bytes location as we found previously by looking at the MBR. Therefore,

Offset to the root directory (in our disk image) = 0x100000 + 0x18A000 = 0x28A000 bytes.
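The same calculation can be expressed as a small Python sketch, using the boot sector values found above:

```python
BYTES_PER_SECTOR = 512
PARTITION_START = 0x100000   # from the MBR partition entry
reserved_sectors = 32        # boot sector offset 0x0E
num_fats = 2                 # boot sector offset 0x10
sectors_per_fat = 1560       # boot sector offset 0x24

# Root directory = partition start + reserved area + both FAT tables.
root_dir_offset = PARTITION_START + \
    (reserved_sectors + num_fats * sectors_per_fat) * BYTES_PER_SECTOR
print(hex(root_dir_offset))  # 0x28a000
```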

Let's jump to this offset in Okteta and see what we find there.

 Root directory of the FAT partition in a new Okteta tab.
Yup, this location indeed looks like the root directory.

(17) The root directory contains entries which are 32 bytes long. At first glance, you can see that there are 6 entries in this root directory. For our convenience, let's copy the whole root directory to a new tab in Okteta.

(18) Now, our reference document tells us which bytes in a directory entry specify which information. The first byte of a root directory entry is important: when a file is deleted, the first byte is simply set to 0xE5. You can identify 3 entries which have the first byte set to 0xE5 and are therefore deleted files. I'm going to pick just one file from this root directory and explore it. It's up to you to deal with the remaining files.

(19) I'm selecting the entry at offset 0x000000A0 to deal with. It's the last entry in our root directory. According to our reference document, the first 11 bytes of a directory entry contain the file name. A dot (.) is implied between the bytes at offsets 0x07 and 0x08. The values in those 11 bytes of our directory entry are as follows.

4E 4F 54 45 53 20 20 20 54 58 54

We simply have to convert each byte into the relevant ASCII character using an ASCII code chart.

NOTES.TXT
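The 8.3 name decoding can be sketched in Python like this; the space-padding and rstrip behavior follow the standard FAT short-name layout:

```python
# The 11 file-name bytes from the directory entry above.
raw = bytes.fromhex("4E4F544553202020545854")

base = raw[:8].decode("ascii").rstrip()   # name, space-padded to 8 chars
ext = raw[8:11].decode("ascii").rstrip()  # extension, padded to 3 chars
filename = f"{base}.{ext}" if ext else base
print(filename)  # NOTES.TXT
```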

(20) In this root directory entry, offset 0x0E contains 2 bytes which represent the file creation time. The value in that location is 95 A4 in little-endian. We convert it to big-endian to get A4 95, which is 1010010010010101 in binary. In this number, the first 5 bits represent the hour, the next 6 bits represent the minute, and the last 5 bits represent the seconds divided by 2.

Hour: 10100 = 20
Minute: 100100 = 36
Second: 10101 = 21 -> 21x2 = 42

Therefore, creation time = 20:36:42

(21) In this root directory entry, offset 0x10 contains 2 bytes which represent the file creation date. The value in that location is 66 4B in little-endian. We convert it to big-endian to get 4B 66, which is 0100101101100110 in binary. In this number, the first 7 bits represent the year since 1980, the next 4 bits represent the month, and the last 5 bits represent the day of the month.

Year: 0100101 =  37 -> 1980+37 = 2017
Month: 1011 = 11
Day: 00110 = 6

Therefore, creation date: 2017/11/06
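The bit fiddling of points (20) and (21) can be wrapped into a small Python helper:

```python
import struct

def fat_datetime(time_bytes, date_bytes):
    """Unpack the 16-bit FAT time and date fields (both little-endian)."""
    t, = struct.unpack("<H", time_bytes)
    d, = struct.unpack("<H", date_bytes)
    hour, minute, second = t >> 11, (t >> 5) & 0x3F, (t & 0x1F) * 2
    year, month, day = 1980 + (d >> 9), (d >> 5) & 0x0F, d & 0x1F
    return year, month, day, hour, minute, second

# Creation time/date bytes from the NOTES.TXT entry:
print(fat_datetime(b"\x95\xA4", b"\x66\x4B"))  # (2017, 11, 6, 20, 36, 42)
```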

(22) In this root directory entry, offset 0x12 contains 2 bytes which represent the file's accessed date. The value in that location is 66 4B in little-endian. That is the same as the file creation date which we calculated previously, so the accessed date is the same as the creation date for this file.

(23) In this root directory entry, offset 0x16 contains 2 bytes which represent the file's modified time. The value in that location is 95 A4 in little-endian. That value is the same as the file creation time, so there is no need to calculate it again in this case.

(24) In this root directory entry, offset 0x18 contains 2 bytes which represent the file's modified date. The value in that location is 66 4B in little-endian. That value is the same as the file creation date, so there is no need to calculate it again in this case.

(25) Now we are ready to process the contents of the file. The location of the first cluster that belongs to the file contents is given in two fields of the root directory entry. Offset 0x14 gives the high-order 2 bytes and offset 0x1A gives the low-order 2 bytes. Remember that each value is in little-endian.

High order value: 00 00 -> little-endian to big-endian -> 00 00
Low-order value: 08 00 -> little-endian to big-endian -> 00 08

Cluster number = 00 00 00 08 = 8 (decimal)

Therefore, the first cluster where our file contents can be found is cluster 8.

Calculating the byte offset to cluster number 8 is again a bit tricky. This is how we handle it. In point (15), we found that the root directory starts at cluster 2. That means there are 8 - 2 = 6 clusters between the root directory and the file data. Therefore, we can calculate the byte offset from the root directory to this file location as follows: we simply multiply 6 clusters by the number of sectors per cluster (which is 1) and by the number of bytes per sector (which is 512).

byte offset from the root directory to the file data = 6 x 1 x 512 = 3072 (decimal) = 0xC00

Now, if we add this offset to the offset of the root directory from the beginning of the disk image, we get the location of the file data exactly from the beginning of the disk image.

absolute offset to the file data =  (byte offset to the root directory) + 0xC00
=  0x28A000 + 0xC00
= 0x28AC00
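This cluster-to-offset arithmetic generalizes to any cluster number. Here is a small Python sketch using the values found earlier in this walkthrough:

```python
BYTES_PER_SECTOR = 512
SECTORS_PER_CLUSTER = 1      # boot sector offset 0x0D
ROOT_DIR_OFFSET = 0x28A000   # computed earlier in step (16)
ROOT_DIR_CLUSTER = 2         # boot sector offset 0x2C

def cluster_to_offset(cluster):
    """Absolute byte offset of a data cluster within the disk image."""
    sectors = (cluster - ROOT_DIR_CLUSTER) * SECTORS_PER_CLUSTER
    return ROOT_DIR_OFFSET + sectors * BYTES_PER_SECTOR

print(hex(cluster_to_offset(8)))  # 0x28ac00
```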

Now, let's go to this offset and you will see the file contents.

That's all folks!

~*************~

## Wednesday, November 1, 2017

### Creating a Raw Image File, Partition and Format It

When playing with file system related stuff, especially when studying how file systems work at the low level with a hex editor, we need many disk images. In such situations, instead of acquiring real disk images, it is possible to artificially create disk images on demand, with any number of partitions, with different file system types, and with a custom size. In this blog post, I'm writing down the steps to follow in order to create such a disk image with a partition table and a single FAT32 partition.

(1) Creating a 100MB raw file.

dd if=/dev/zero of=image.dd iflag=fullblock bs=1M count=100 && sync

(2) Mounting the blank image into a loop device.

sudo losetup /dev/loop0 image.dd

Now, if you run the command losetup, you should see an output showing that the loop device /dev/loop0 is attached to the image.

(3) Let's partition this new loop device using GParted tool. For that, first we should install it.

sudo apt-get install gparted

(4) Open the GParted tool using the following command. Follow the steps of the screenshots in order to create the partition table and a FAT32 partition.

sudo -H gparted /dev/loop0

 GParted window.

 Creating a partition using the "Device" menu.

 Select partition type "msdos" and apply.

 Our drive with a partition table but no partitions yet.

 Creating a partition using the "Partition" menu.

 Select "File system" type as fat32 and click add.

 Newly created partition. Size is smaller because of the partition table, etc.

 Click on the button to apply file system creation operation to the drive.

 All done. Click "close" to finish and close the GParted window.

(5) Detach the loop device.

sudo losetup -d /dev/loop0

Now, our image.dd file contains a partition table of msdos type and a single partition with FAT32 file system. We can check it using a command available on Sleuthkit as follows.

sudo apt-get install sleuthkit
mmls image.dd

~************~

## Thursday, October 26, 2017

### 3. Notes on Machine Learning: Basics of Neural Networks

Neural networks are an interesting branch of machine learning which attempts to mimic the functionality of neurons in the human brain. A neural network consists of the input feature vector $$X$$ fed to a node, the hypothesis function, sometimes called the activation function, running inside the node, and finally the output of the function. Instead of having a single activation unit, we can have multiple layers of activation nodes. The input vector layer is considered the first layer, while there are multiple hidden layers (layer 2, layer 3, etc.) before the output layer.

The $$\theta$$ parameter set is not a single vector in this case, like in linear regression and logistic regression. This time, in neural networks, we have a $$\theta$$ parameter set between every two layers. For example, in the above figure, we have three layers and therefore two $$\theta$$ sets. The arrows going from layer 1 to layer 2 represent the parameter set $$\theta^{(1)}$$. The arrows going from layer 2 to layer 3 represent the parameter set $$\theta^{(2)}$$. The superscript number within the brackets represents the layer this parameter set originates from. Furthermore, $$\theta^{(1)}$$ is a matrix with 3x4 dimensions. There, every row represents the set of arrows coming from the layer 1 features to a node in layer 2. For example, the element $$\theta^{(1)}_{10}$$ represents the arrow to $$a_1^{(2)}$$ from $$x_0$$. The element $$\theta^{(1)}_{20}$$ represents the arrow to $$a_2^{(2)}$$ from $$x_0$$.

$$\theta^{(1)} = \begin{bmatrix}\theta^{(1)}_{10} & \theta^{(1)}_{11} & \theta^{(1)}_{12} & \theta^{(1)}_{13}\\\theta^{(1)}_{20} & \theta^{(1)}_{21} & \theta^{(1)}_{22} & \theta^{(1)}_{23}\\\theta^{(1)}_{30} & \theta^{(1)}_{31} & \theta^{(1)}_{32} & \theta^{(1)}_{33}\end{bmatrix}$$

Meanwhile, $$\theta^{(2)}$$ is a row vector (1x4) in this case. This is because there are 4 arrows coming from the layer 2 nodes (including the bias unit) to the single node in layer 3.

$$\theta^{(2)} = \begin{bmatrix}\theta^{(2)}_{10} & \theta^{(2)}_{11} & \theta^{(2)}_{12} & \theta^{(2)}_{13}\end{bmatrix}$$

The hypothesis function in these neural networks is a logistic function, just like in logistic regression.
$$h_\theta (x) = \frac{1}{1 + e^{- \theta^T x}}$$
For a neural network like the one shown in the above figure, we can calculate the activations and get the final output in the following way. There, $$a_1^{(2)}$$ represents activation node 1 in layer 2 (the hidden layer). Similarly, $$a_2^{(2)}$$ represents activation node 2 in layer 2, and so on.

$$a_1^{(2)} = g(\theta_{10}^{(1)} x_{0} + \theta_{11}^{(1)} x_{1} + \theta_{12}^{(1)} x_{2} + \theta_{13}^{(1)} x_{3})$$

$$a_2^{(2)} = g(\theta_{20}^{(1)} x_{0} + \theta_{21}^{(1)} x_{1} + \theta_{22}^{(1)} x_{2} + \theta_{23}^{(1)} x_{3})$$

$$a_3^{(2)} = g(\theta_{30}^{(1)} x_{0} + \theta_{31}^{(1)} x_{1} + \theta_{32}^{(1)} x_{2} + \theta_{33}^{(1)} x_{3})$$

$$h_\theta (x) = a_1^{(3)} = g(\theta_{10}^{(2)} a_{0}^{(2)} + \theta_{11}^{(2)} a_{1}^{(2)} + \theta_{12}^{(2)} a_{2}^{(2)} + \theta_{13}^{(2)} a_{3}^{(2)})$$
Since the hypothesis function is a logistic function, the final output we get is a value between 0 and 1. To build a multiclass classifier, we use multiple nodes in the output layer. Then, we get a unique output value from each node in the output layer representing a specific class.
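The forward propagation above can be sketched with NumPy. The weight values below are arbitrary, chosen only to illustrate the 3x4 and 1x4 shapes of $$\theta^{(1)}$$ and $$\theta^{(2)}$$:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Theta1: 3 hidden units x (3 inputs + bias); Theta2: 1 output x (3 hidden + bias).
# These particular values are made up purely for illustration.
Theta1 = np.array([[0.1, 0.2, -0.3, 0.4],
                   [-0.5, 0.6, 0.7, -0.8],
                   [0.9, -1.0, 1.1, 1.2]])
Theta2 = np.array([[0.5, -0.6, 0.7, 0.8]])

x = np.array([1.0, 2.0, 3.0])   # input features x1, x2, x3

a1 = np.insert(x, 0, 1.0)       # layer 1 with bias unit x0 = 1
a2 = sigmoid(Theta1 @ a1)       # layer 2 (hidden) activations
a2 = np.insert(a2, 0, 1.0)      # add bias unit a0 = 1
h = sigmoid(Theta2 @ a2)        # layer 3 output, h_theta(x)
print(h)                        # a value between 0 and 1
```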

~************~

## Tuesday, October 17, 2017

### 2. Notes on Machine Learning: Logistic Regression

Unlike regression problems, where our objective is to predict a scalar value for a given set of feature values, in classification problems our objective is to decide on a discrete value from a limited set of choices. Therefore, we are not going to use linear regression for classification problems. Based on how many categories we have to classify our input into, we have two types of problems: binary classification and multiclass classification.

In binary classification problems, there are only two possible outputs for the classifier: 0 or 1. As is obvious, multiclass classification problems can have multiple output values such as 0, 1, 2, and so on; however, it is still a discrete set of values, not an infinite one. When solving these classification problems, we develop machine learning models which can solve binary classification problems. When we have multiple classes in a problem, we use multiple binary classification models, one to check for each class.

Logistic Regression:

In order to keep the prediction output between 0 and 1, we use a sigmoid/logistic function as the hypothesis function. Therefore, we call this technique Logistic Regression. The hypothesis function of logistic regression is as follows.
$$h_\theta (x)=\frac{1}{1+e^{-\theta^T x}}$$
The vectorized representation of this hypothesis would look like the following.
$$h=g(X\theta)$$
For any input value in $$x$$, this logistic regression hypothesis function outputs a value between 0 and 1. If the value is closer to 1, we can consider it a classification into class 1. Similarly, if the value is closer to 0, we can consider the classification as class 0. Alternatively, we can interpret the output value between 0 and 1 as the probability of belonging to class 1. For example, if we receive the output 0.6, it means there's a $$60\%$$ probability that the input data belongs to class 1. Similarly, if we receive the output 0.3, it means there's a $$30\%$$ probability that the input data belongs to class 1.

Cost Function:

The cost function of logistic regression is different from the cost function of linear regression. This cost function is designed so that if the logistic regression model makes a prediction with $$100\%$$ accuracy, it generates a zero cost penalty, while if it makes a prediction with $$0\%$$ accuracy, it generates an infinite penalty value.
$$J(\theta)=-\frac{1}{m}\sum_{i=1}^{m} [y^i log(h_\theta(x^i)) + (1-y^i) log(1-h_\theta(x^i))]$$
A vectorized implementation of the cost function would look like the following.
$$J(\theta)=\frac{1}{m}(-y^T\log(h) - (1-y)^T\log(1-h))$$

In order to adjust the parameter vector $$\theta$$ until it fits properly to the training data set, we need to perform the gradient descent algorithm. The following line has to be repeated simultaneously for each $$j$$, which indexes the parameters $$\theta$$. In this gradient descent algorithm, $$\alpha$$ is the learning rate.
$$\theta_j=\theta_j - \frac{\alpha}{m}\sum_{i=1}^{m}(h_\theta(x^i) - y^i)x_j^i$$
A vectorized implementation of the gradient descent algorithm would look like the following.
$$\theta = \theta - \frac{\alpha}{m}X^T(g(X\theta) - y)$$
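The vectorized update can be sketched with NumPy as follows. The toy data set is made up purely for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_descent(X, y, alpha=0.1, iterations=5000):
    """Vectorized gradient descent for logistic regression.
    X must already contain the bias column of ones."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(iterations):
        theta -= (alpha / m) * (X.T @ (sigmoid(X @ theta) - y))
    return theta

# Toy data: one feature, class 1 roughly when the feature exceeds 2.
X = np.array([[1, 0.5], [1, 1.0], [1, 1.5], [1, 3.0], [1, 3.5], [1, 4.0]])
y = np.array([0, 0, 0, 1, 1, 1])
theta = gradient_descent(X, y)
print(sigmoid(X @ theta))  # near 0 for the first three rows, near 1 for the rest
```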

Multiclass Classification:

The above technique works for classifying data into two classes. When we encounter a multiclass classification problem, we should train a logistic regression model for each class. For example, if there are 3 classes, we need three logistic regression models, each trained to distinguish between the targeted class and the others. In this way, we can solve multiclass classification problems using logistic regression.

Overfitting and Underfitting:

Overfitting and underfitting are two problems which can occur in both linear regression and logistic regression. The former occurs when our model fits the training data set so closely that it does not represent the general case properly. The latter occurs when our model does not even fit the training data set properly.

Regularization is a nice technique to solve the problem of overfitting. What happens there is that we keep the values of the $$\theta$$ parameter vector in a smaller range in order to stop the learned model curve from adjusting too aggressively. This is achieved by adding extra weights to the cost function. It prevents the model from overfitting to the training dataset. We can use this technique in both linear regression and logistic regression.

Regularized Linear Regression:

The gradient descent algorithm for linear regression after adding regularization looks like the following. The two update rules have to be repeated in each step of the gradient descent. There, $$j$$ stands for 1, 2, 3, ..., representing each $$\theta$$ parameter in the hypothesis. $$\lambda$$ is the regularization parameter.
$$\theta_0 = \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m}(h_\theta(x^i) - y^i) x_0^i$$
$$\theta_j = \theta_j - \alpha [(\frac{1}{m} \sum_{i=1}^{m}(h_\theta(x^i) - y^i) x_j^i) + \frac{\lambda}{m} \theta_j]$$

Regularized Logistic Regression:

The gradient descent algorithm for logistic regression after adding regularization looks like the following.
$$\theta_0 = \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m}(h_\theta(x^i) - y^i) x_0^i$$
$$\theta_j = \theta_j - \alpha [(\frac{1}{m} \sum_{i=1}^{m}(h_\theta(x^i) - y^i) x_j^i) + \frac{\lambda}{m} \theta_j]$$
Of course, it looks like the regularized linear regression. However, it is important to remember that in this case, the hypothesis function $$h_\theta (x)$$ is a logistic function, unlike in linear regression.

~**********~

## Thursday, October 12, 2017

### Ireland's Space Week in 2017

 Image credit: @Lana_Salmon
Last week was Space Week in Ireland, with a focus on promoting space exploration and related science and technologies among people. This time is special for Ireland because universities and Irish space technology companies are jointly building a cube satellite. During this Space Week, there were many events organized by different institutions all over Ireland. Even though I was busy with my work, I finally managed to attend an interesting event organized by University College Dublin.

The event was titled Designing for Extreme Human Performance in Space and was conducted by two very interesting personalities. The first was Dava J. Newman, a former deputy administrator of NASA who currently works at MIT. The second was Guillermo Trotti, a professional architect who has worked for NASA on interesting projects. Seeing the profiles of these two speakers attracted me to the event. The session lasted about an hour and a half, with the two speakers sharing the time to talk about two different areas they are interested in. Finally, the session was concluded with a Q&A session.

 Image credit: @ASayakkara
In her presentation, Dava talked about the extreme conditions in space which create the need for life support systems to assist astronauts. When she asked the famous astronaut Scott Kelly (@StationCDRKelly), who spent a year on the ISS, what would be the most needed thing to improve space technology, he responded that life support systems to ease the operation of astronauts in space are the most needed thing. Dava presented the work she is involved in, designing a new kind of space suit for astronauts to use on other planets such as Mars. The pictures she showed depict a skin-tight suit custom designed to the body specifications of an astronaut, very much like a suit from a sci-fi movie.

Gui Trotti, in his presentation, talked specifically about his architectural interest in building habitable structures for humans on the Moon and Mars. As a professional architect, he is inspired to bring his skills to human colonies in outer space. During the presentation, he mentioned three things that inspired me greatly. The first is the fact that when an astronaut goes to space and turns back to look at their home planet, all the borders and nationalistic pride go away, replaced by the feeling that we are all one human race and that planet Earth is the only home we have. Secondly, he described his tour around the world in a sailing boat, which reminded him that space exploration is another form of human courage to explore and see the world. Finally, he said that his dream is to build a university on the Moon one day, to enable students from Earth to visit and do research while appreciating our home planet.

During the Q&A session, a lot of people asked interesting questions. Among them, one question was about the commercialization of space. The speakers responded with the important fact that there is potential for commercial activities such as manufacturing in space, especially for things which can be done more easily in zero-gravity environments than on the surface of the Earth. Various things such as growing food plants and 3D printing have been tried on the ISS in this direction. In the near future, perhaps a decade down the line, we will be able to see much more activity from the private sector in space than today. They are very positive about the progress in this area.

Even though I don't work anywhere related to space exploration, I'm always fascinated by this topic and will continue to be.

~*********~