neutrofoton

Science, Technology and Life

Math and Computation in Neural Network


An Artificial Neural Network (ANN) is a series of algorithms that endeavors to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates[1]. Each neuron typically has weights that adjust as learning proceeds. This article focuses on the basic mathematics and computation in an ANN.

Artificial Neuron

An artificial neuron is a mathematical function that models a biological neuron. A neuron can receive input from other neurons. The neuron inputs are multiplied by weights and summed (together with a bias) before being passed into an activation function. Figure 1 shows the structure of an artificial neuron.

Figure 1: Artificial Neuron


An ANN is a supervised learning method whose operation involves a dataset. The dataset is labeled and split into at least two parts, namely training and testing. We have an expected output (from the label) for each input. During the training process, the weights and biases are updated in order to achieve a result that is close to the labeled data. To measure model performance, we evaluate the model on the testing dataset to verify how good the trained model is. To see in more detail how an ANN learns, let us begin with the last two connected neurons.

Figure 2: Simple Two Neuron Connection

a^(L) is a neuron's activation.
The superscript (L) is a layer index; it is not an exponent.

In the training phase, sample data is fed through the ANN. The outcome of the ANN is inspected and compared to the expected result (the label). The difference between the outcome and the expected result is called the Cost/Error/Loss. There are several cost functions that can be used to evaluate model performance. One of the common ones is the quadratic cost function.

The cost for the network in Figure 2, in the form of the quadratic cost function, is:

C_0 = (a^(L) − y)^2        Eq (1)

The total cost of one feed through the ANN is the sum over the output neurons:

C = Σ_j (a_j^(L) − y_j)^2        Eq (2)

Equations (1) and (2) show that the closer the outcome of the ANN is to the expected result, the smaller the cost will be. The cost can be thought of as a mathematical function of the weights w and biases b. Hence, the fittest model can be achieved by minimizing the cost to find suitable weights and biases.

Gradient Descent

Figure 3: Gradient vector in the direction of vector U


In multivariable calculus, the gradient of a function f is denoted ∇f. The rate of change of f in the direction of the vector u in Figure 3 is expressed as the directional derivative:

D_u f = ∇f · u = |∇f| |u| cos(θ)        Eq (3)

If u is a unit vector, then |u| = 1. Thus, Equation (3) can be written as

D_u f = |∇f| cos(θ)        Eq (4)

The maximum of Equation (4) is attained at θ = 0, i.e. when u points in the same direction as the gradient. Since u is a unit vector, it can be evaluated from ∇f as

u = ∇f / |∇f|

The minimum slope (steepest descent) is attained in the opposite direction of the steepest ascent, at θ = π:

u = −∇f / |∇f|

Since training an ANN is essentially minimizing the cost function, gradient descent is used as the basic idea to find the best-fit ANN parameters (weights and biases) in the learning process:

w := w − η (∂C/∂w)        Eq (5)

The same method is also applied to the bias update:

b := b − η (∂C/∂b)        Eq (6)

η is the learning rate. It controls how much of the gradient vector we use to change the current weights and biases into the new ones. If η is too small, the weight adjustment is slow and convergence to a local minimum takes longer. If it is too large, the search for the local minimum may oscillate or overshoot. Gradient descent in a 3-dimensional plot is illustrated in Figure 4; of course, it is hard to plot a gradient descent surface that covers all the weights of an ANN.
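As an illustration that is not part of the original post, the update rule of Equation (5) can be sketched in Python on a toy one-parameter cost C(w) = (w − 3)^2, whose gradient is dC/dw = 2(w − 3):

```python
# Gradient descent on a toy one-parameter cost C(w) = (w - 3)^2.
# The minimum is at w = 3 and the gradient is dC/dw = 2 * (w - 3).

def gradient_descent(w0, eta=0.1, steps=200):
    """Repeatedly apply the update w := w - eta * dC/dw (cf. Eq (5))."""
    w = w0
    for _ in range(steps):
        grad = 2.0 * (w - 3.0)
        w -= eta * grad
    return w

print(gradient_descent(0.0))  # approaches the minimum at w = 3
```

With eta = 0.1 the iterates converge smoothly; pushing eta close to or above 1.0 makes the search oscillate or overshoot, matching the learning-rate discussion above.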

Figure 4: Gradient descent illustration


Iterative training lets the cost function gradient move in the direction of the green line toward the local minimum, as illustrated in Figure 4. Reaching the global minimum would be ideal; however, it is very difficult since we start the movement from a random location (random initial weights). Reaching a local minimum is acceptable in finding an optimal model. If we want a better model, we can retrain the ANN with newly generated random values or different initial values of the weights and biases.

Mathematical Notation

It has been discussed previously that the basic idea for finding the optimal model is the gradient of a multivariable function. The topic of the gradient in calculus cannot be separated from the discussion of function composition and derivatives. Since an ANN can have several layers, chained functions and chained derivatives are needed to analyze its mathematics. The 3Blue1Brown team has a good graphical illustration of chained functions and derivatives in an ANN. This article adopts that illustration in describing the chain rule.

By using the neuron topology in Figure 1, we can identify that:

a^(L) = σ(w^(L) a^(L−1) + b^(L))        Eq (7)

Let us define z^(L) to simplify the mathematical expression:

z^(L) = w^(L) a^(L−1) + b^(L)        Eq (8)

Thus,

a^(L) = σ(z^(L))        Eq (9)

The correlation between the variables is described graphically as:

Figure 5: A cost chaining rule in a layer


The graphical correlation can be extended to the previous neuron.

Figure 6: A cost chaining rule in two layers


The sensitivity of the cost function to a change of weight is expressed with the chain rule as:

∂C_0/∂w^(L) = (∂z^(L)/∂w^(L)) (∂a^(L)/∂z^(L)) (∂C_0/∂a^(L))        Eq (10)

From Equations (1) and (10): ∂C_0/∂a^(L) = 2(a^(L) − y)

The coefficient 2 indicates that the deviation between a^(L) and y has a significant impact on the cost.

From Equations (9) and (10): ∂a^(L)/∂z^(L) = σ'(z^(L))

From Equations (8) and (10): ∂z^(L)/∂w^(L) = a^(L−1)

So the mathematical expression in Equation (10) can be written as:

∂C_0/∂w^(L) = a^(L−1) σ'(z^(L)) 2(a^(L) − y)        Eq (11)
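As a quick sanity check (my addition, not from the original article), the analytic derivative of Equation (11) can be compared against a finite-difference estimate for a single weight, using the definitions of Equations (1), (8), and (9):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cost(w, a_prev=0.6, b=0.1, y=1.0):
    """C0 = (a - y)^2 with a = sigmoid(w * a_prev + b), as in Eqs (1), (8), (9)."""
    a = sigmoid(w * a_prev + b)
    return (a - y) ** 2

# Analytic gradient from Eq (11): dC/dw = a_prev * sigmoid'(z) * 2 * (a - y),
# where sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z)).
w, a_prev, b, y = 0.5, 0.6, 0.1, 1.0
z = w * a_prev + b
a = sigmoid(z)
analytic = a_prev * (a * (1.0 - a)) * 2.0 * (a - y)

# Central finite-difference estimate of the same derivative.
h = 1e-6
numeric = (cost(w + h) - cost(w - h)) / (2 * h)

print(analytic, numeric)  # the two values agree closely
```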

Since the cost can be thought of as a function of the weights and biases, the gradient of the cost function for each training step can be expressed in partial derivatives with respect to all weights and biases. Thus we can present it as a vector:

∇C = [ ∂C/∂w_1, ∂C/∂b_1, …, ∂C/∂w_n, ∂C/∂b_n ]^T        Eq (12)

The sensitivity of the cost function to weight can be extended to analyze the sensitivity of the cost function to bias:

∂C_0/∂b^(L) = (∂z^(L)/∂b^(L)) (∂a^(L)/∂z^(L)) (∂C_0/∂a^(L))        Eq (13)

From Equation (8): ∂z^(L)/∂b^(L) = 1

Thus,

∂C_0/∂b^(L) = σ'(z^(L)) 2(a^(L) − y)


To get a more comprehensive neuron connection, Figure 7 denotes an ANN with more detailed subscripts and superscripts that show the neuron order and layer indices.

Figure 7: ANN with indexed neurons


So that,

z_j^(L) = Σ_k w_jk^(L) a_k^(L−1) + b_j^(L),    a_j^(L) = σ(z_j^(L))

Figure 7 shows that a_k^(L−1) impacts the values of all z_j^(L) and a_j^(L). Thus, the rate of change of C_0 with respect to a_k^(L−1) is evaluated as a sum over the neurons of layer L:

∂C_0/∂a_k^(L−1) = Σ_j w_jk^(L) σ'(z_j^(L)) 2(a_j^(L) − y_j)        Eq (14)

Generally, Equation (11) can be written in fully indexed notation as:

∂C_0/∂w_jk^(L) = a_k^(L−1) σ'(z_j^(L)) 2(a_j^(L) − y_j)        Eq (15)

The components of Equation (15) can be evaluated using the same approach as in Equation (14). If the cost function is defined as C_0 = Σ_j (a_j^(L) − y_j)^2, then ∂C_0/∂a_j^(L) = 2(a_j^(L) − y_j).

Numerical Computation

To complement the explanation about neural networks, this post uses an example provided by Tobias Hill, with a slight modification in notation to match the convention of the previous section. The neural network structure is shown in Figure 8.

Figure 8: Neural network with numeric attributes


The activation function we use is the sigmoid, with a given learning rate η:

σ(z) = 1 / (1 + e^(−z))        Eq (16)

σ'(z) = σ(z) (1 − σ(z))        Eq (17)

The cost function of the ANN is evaluated with Equation (1).

Feed Forward

First of all, let's evaluate the output of each neuron by applying the sigmoid activation function of Equation (16).





The resulting and expected values of the ANN can be expressed as vectors.


The total cost of the first feed through the ANN is evaluated with Equation (2):




Equation (2) shows that C is essentially a function of the output activations a_j^(L).
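Since the numeric attributes of Figure 8 are not reproduced in this text, the following sketch uses hypothetical weights, biases, inputs, and labels to show how the feed-forward pass and the total cost of Equation (2) are computed:

```python
import math

def sigmoid(z):                          # Eq (16)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical network: 2 inputs feeding an output layer of 2 neurons.
inputs   = [0.05, 0.10]                  # activations a_k of the previous layer
weights  = [[0.15, 0.20],                # weights w_jk into output neuron 1
            [0.25, 0.30]]                # weights w_jk into output neuron 2
biases   = [0.35, 0.35]
expected = [0.01, 0.99]                  # labels y_j

# Feed forward: a_j = sigmoid(sum_k w_jk * a_k + b_j)
outputs = []
for w_row, b in zip(weights, biases):
    z = sum(w * x for w, x in zip(w_row, inputs)) + b
    outputs.append(sigmoid(z))

# Total cost, Eq (2): C = sum_j (a_j - y_j)^2
cost = sum((a - y) ** 2 for a, y in zip(outputs, expected))
print(outputs, cost)
```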

Back Propagation

Weight of








Thus,




By using the same method, we can evaluate and update the other weights and biases in the last layer.



To update the biases, we use a similar approach to the weight update.



Weight of
















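With the same caveat that the concrete numbers of the original example are not reproduced here, the back-propagation update of a single output-layer weight combines Equation (11) (the gradient) with Equation (5) (the update rule), using hypothetical values:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical single connection into an output neuron.
a_prev, w, b, y, eta = 0.60, 0.40, 0.10, 1.00, 0.5

# Feed forward, Eqs (8)-(9)
z = w * a_prev + b
a = sigmoid(z)

# Eq (11): dC/dw = a_prev * sigmoid'(z) * 2 * (a - y)
grad_w = a_prev * (a * (1.0 - a)) * 2.0 * (a - y)

# Eq (5): w := w - eta * dC/dw
w_new = w - eta * grad_w
print(w, w_new)
```

Because the activation a is below the target y = 1, the gradient is negative, so the weight increases and the cost is lower on the next feed-forward pass.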

References

  1. Neural Network, James Chen
  2. Gradient Descent and Backpropagation, Tobias Hill
  3. How to Code a Neural Network with Backpropagation In Python (from scratch), Jason Brownlee
  4. Backpropagation calculus-Deep learning, 3Blue1Brown

Domain Driven Design Short Summary


I believe there are tons of articles and books out there that discuss Domain Driven Design (DDD for short). This post is essentially taken from my notes, archived when I explored DDD from several resources. I rewrite it here mainly as a refresher for myself.

Domain Driven Design (DDD) is the concept that the structure and language of software code (class names, class methods, class variables) should match the business domain[1]. We don't need to kill a mosquito with a cannon; we need to choose the fittest method for solving a problem. Likewise, DDD is not always the fitting method for every application design. A software application has several attributes. Some of them deal with[3]:

  • Amount of Data
  • Performance
  • Business Logic complexity
  • Technical Complexity

Of these four software attributes, DDD is most suitable for an application with complex business logic. DDD is designed to tackle the complexity of business rules. The main goals of DDD are[1]:

  • Placing the project’s primary focus on the core domain and domain logic;
  • Basing complex designs on a model of the domain;
  • Initiating a creative collaboration between technical and domain experts to iteratively refine a conceptual model that addresses particular domain problems.

One of the good resources about DDD is the DDD course by Vladimir Khorikov. I also cite a few code snippets from his course to make my summary about DDD clearer.

Terms in DDD

Ubiquitous Language

A Ubiquitous Language bridges the gap between the developer and the business expert/domain expert/subject matter expert (SME). The Ubiquitous Language notion came up to avoid misunderstanding between them. For example, the developer has a class Product that represents the business terms Product and Package together. On the other hand, the business expert treats Product and Package as different things. In that condition, a shared language is needed to avoid misunderstanding.

Bounded Context

Bounded Context is a central pattern in DDD. It is the focus of DDD’s strategic design section which is all about dealing with large models and teams. DDD deals with large models by dividing them into different Bounded Contexts and being explicit about their interrelationships [Martin Fowler][8].

The Bounded Context notion comes up to make clear boundaries between different parts of the system. Let's say our system consists of Sales and Support; we can separate Product into a Sales context and a Support context to make clear boundaries between the two.

Figure 1: Bounded Contexts
Source: https://martinfowler.com/bliki/BoundedContext.html

Figure 1 shows that Bounded Contexts aim to separate the models and explicitly draw the boundaries between pieces. The reason for context separation is that as the application grows, it becomes harder to maintain a single unified model as it becomes larger and more people get involved in the development process.

Sub Domain

The difference between Sub Domain and Bounded Context is that the Sub Domain is a Problem Space, while the Bounded Context is a Solution Space. Sub Domain and Bounded Context have a 1-to-1 relation, meaning that a Sub Domain should be covered by exactly one Bounded Context.

Figure 2: Sub Domain - Bounded Contexts

Since the Sub Domain is a Problem Space, it should be defined by Business Experts or Domain Experts. The Sub Domain topic usually comes up when we talk with the Domain Expert while gathering requirements.

Core Domain

The Core Domain is the notion that focuses on the Domain Model, which is the most important part of the system.

Onion Architecture

The implementation of DDD in the onion architecture[3][5][6] is shown in Figure 3.

Figure 3: DDD in onion architecture

Figure 3 shows that the core of the architecture is the Domain Model. The Domain Model is isolated from the others for separation-of-concerns purposes. It consists of Entities, Value Objects, Domain Events, and Aggregates. They are the most important parts in DDD since they represent the business logic implementation.

In the onion architecture, the outer layer depends on the inner one. The inner layer is isolated from the outer layer, which means the inner layer does not know about the outer one. Figure 3 denotes that the Domain Model does not know how it is persisted to the database, since persisting the model is handled by a Repository in an outer layer.

Testing in DDD

Unit testing is an important part of software development. Reaching 100% test coverage takes high effort. In practice, unit tests should primarily cover the code base (the Domain Model) at the innermost layer of the onion architecture; the closer its coverage is to 100%, the better. Meanwhile, the test scenarios of the outer layers can be covered by integration tests.

Entity vs Value Object

The difference between Entity and Value Object can be identified from several ways:

  • Type of Equality
  • Immutability
  • Lifespan

1. Type of equality

Type Equality applies to objects of the same type. It is classified into 3 categories:

  • Identifier Equality
    Two objects A and B have Identifier Equality if they have the same Id.

  • Reference Equality
    Two objects A and B have Reference Equality if they point to the same memory address.

  • Structural Equality
    Two objects A and B have Structural Equality if all their member values match.

The 3 types of Type Equality are summarized in the following figure.

Figure 4: Type Equality classification

Identifier Equality usually refers to Entities, while Structural Equality refers to Value Objects. Reference Equality can be applied to either an Entity or a Value Object. In practice, a Value Object does not have an Id.

2. Immutability

The characteristics of an Entity:

  • Has an identity (Id)
  • Mutable

The characteristics of a Value Object:

  • Has no identity (Id)
  • Immutable

3. Lifespan

From the lifespan point of view, a Value Object cannot live on its own; it is owned by one or several Entities. For example, an Address object cannot stand alone; it belongs to a Person object. From a persistence perspective, a Value Object does not have its own table in the database.

How to Recognize Entity or Value Object

It is not always clear from specific characteristics whether a term or notion in a business process is an Entity or a Value Object; it depends on the business process itself. A good approach to identify whether a notion is an Entity or a Value Object is to compare it to an integer[3].

//Integer
public void MethodA(){
    int value1 = 5;
    int value2 = 5;
}

//Value Object
public void MethodB(){

    //money1 and money2 are essentially the same,
    //since they have the same nominal value in the business process context.
    //Programmatically they both have structural equality.
    Money money1 = new Money(5);
    Money money2 = new Money(5);
}

Ideally, most business logic elements are identified as Value Objects, with Entities acting as wrappers around them. However, don't hesitate to refactor a Value Object into an Entity, or vice versa, if we identify that it should be.

Entity Base Class
public abstract class Entity
{
    public virtual long Id { get; protected set; }

    public override bool Equals(object obj)
    {
        var other = obj as Entity;

        if (ReferenceEquals(other, null))
            return false;

        if (ReferenceEquals(this, other))
            return true;

        if (GetRealType() != other.GetRealType())
            return false;

        if (Id == 0 || other.Id == 0)
            return false;

        return Id == other.Id;
    }

    public static bool operator ==(Entity a, Entity b)
    {
        if (ReferenceEquals(a, null) && ReferenceEquals(b, null))
            return true;

        if (ReferenceEquals(a, null) || ReferenceEquals(b, null))
            return false;

        return a.Equals(b);
    }

    public static bool operator !=(Entity a, Entity b)
    {
        return !(a == b);
    }

    public override int GetHashCode()
    {
        return (GetRealType().ToString() + Id).GetHashCode();
    }

    private Type GetRealType()
    {
        return NHibernateProxyHelper.GetClassWithoutInitializingProxy(this);
    }
}

Value Object base class
public abstract class ValueObject<T>
    where T : ValueObject<T>
{
    public override bool Equals(object obj)
    {
        var valueObject = obj as T;

        if (ReferenceEquals(valueObject, null))
            return false;

        return EqualsCore(valueObject);
    }


    protected abstract bool EqualsCore(T other);


    public override int GetHashCode()
    {
        return GetHashCodeCore();
    }


    protected abstract int GetHashCodeCore();


    public static bool operator ==(ValueObject<T> a, ValueObject<T> b)
    {
        if (ReferenceEquals(a, null) && ReferenceEquals(b, null))
            return true;

        if (ReferenceEquals(a, null) || ReferenceEquals(b, null))
            return false;

        return a.Equals(b);
    }


    public static bool operator !=(ValueObject<T> a, ValueObject<T> b)
    {
        return !(a == b);
    }
}

Aggregate

Aggregate is a pattern in DDD. It is a cluster of domain objects that can be treated as a single unit[7]. An Aggregate is an encapsulation of Entities and/or Value Objects (domain objects). An Entity can belong to only a single Aggregate, while a Value Object can belong to multiple Aggregates.

An Aggregate contains a set of operations through which those domain objects can be operated on. An Aggregate also acts as a single operational unit: the application layer should load it from the database, perform the action, and store it back as a single object. Hence, an Aggregate should not be too large; commonly it contains at most 3 Entities. In contrast to Entities, we can have as many Value Objects in an Aggregate as we want.

Aggregate Root
public abstract class AggregateRoot : Entity
{

}

Repository for Database Operation
public abstract class Repository<T> where T : AggregateRoot
{
    public T GetById(long id)
    {
        throw new NotImplementedException("load data from database");
    }

    public void Save(T aggregateRoot)
    {
        throw new NotImplementedException("operation insert to database");
    }
}

Domain Event

A Domain Event represents an event that's significant to the Domain Model. It's important to distinguish a Domain Event from a System Event. System Events deal with infrastructure events such as a button click, a timer tick, a window close, etc. On the other hand, a Domain Event describes an occasion that is important to the Domain. For example, when a button is clicked (a System Event), it calls a domain operation. The domain operation then triggers an event to change balance values in another Bounded Context, for example the head office.

Domain Event is often used to[3]:

  • Decouple Bounded Context
  • Facilitate communication between Bounded Context
  • Decouple Entities within a Bounded Context

The guidelines in implementing Domain Event are[3]:

  • Naming should be in past tense. Example: BalanceChangedEvent
  • Pass as little data as possible. Don't pass more data/information than needed.
  • We should not pass an Entity to an Event, since it would produce an additional point of coupling. We should use primitive data types instead.
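To make the guidelines concrete, here is a minimal sketch with hypothetical names (written in Python for brevity rather than the C# used above): the event is named in past tense, is immutable, and carries only primitive data instead of a full Entity.

```python
from dataclasses import dataclass

# A Domain Event named in past tense, carrying only primitive data
# (an id and amounts), never a full Entity. frozen=True makes it immutable.
@dataclass(frozen=True)
class BalanceChangedEvent:
    account_id: int
    old_balance: float
    new_balance: float

event = BalanceChangedEvent(account_id=42, old_balance=100.0, new_balance=75.0)
print(event.new_balance - event.old_balance)  # -25.0
```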

References

  1. https://en.wikipedia.org/wiki/Domain-driven_design
  2. Domain Driven Design: Tackling Complexity in the Heart of Software
  3. https://www.pluralsight.com/courses/domain-driven-design-in-practice
  4. https://www.baeldung.com/spring-data-ddd
  5. https://www.baeldung.com/spring-boot-clean-architecture
  6. https://jeffreypalermo.com/2008/07/the-onion-architecture-part-1/
  7. https://martinfowler.com/bliki/DDD_Aggregate.html
  8. https://martinfowler.com/bliki/BoundedContext.html

Generate POJO and Hibernate Mapping Using Hibernate Tools


Sometimes we get into a condition where we need to create POJO classes from an existing database. It could take time if the database has many tables. Instead of manually creating them, we can use Hibernate Tools, an Eclipse plugin that can be installed from the Eclipse Marketplace.

In this post, we use Spring Tool Suite (STS) 4 instead of the plain Eclipse IDE. The database we use is PostgreSQL, but the steps should be applicable to any other database.

The steps for using Hibernate Tools are summarized as follows. I append many screenshots in this post so the steps are easier to follow.

Step 1 : Install Hibernate Tools plugin

To install Hibernate Tools plugin, go to menu Help > Eclipse Marketplace .

Enter Hibernate Tools in the search field.

Follow the installation steps until the installation succeeds and finishes. Eclipse will show a pop-up message asking to restart once the installation is finished.

Step 2 : Create Java Maven project

Before building a connection to the database, we will create a maven project that contains a hibernate configuration file and where the generated code will be placed. The following figures show the maven project configuration setting.

First, ensure we are in the Java perspective. From the menu File > New > Other… select Maven Project as in the following figure.

Select the checked box of Create a simple project then press Next.

Fill the Group Id and Artifact Id. Select jar for the Packaging option, then click Finish

Create a namespace in the maven project where the generated code will be placed.

The project structure should look like the following picture.

Once the project skeleton is constructed, edit pom.xml by adding the Hibernate and database driver dependencies. The latest jar versions can be found in the Maven Repository.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>io.neutrofoton.lab</groupId>
  <artifactId>hibernatereverse</artifactId>
  <version>0.0.1-SNAPSHOT</version>

 <dependencies>

  <dependency>
      <groupId>org.hibernate</groupId>
      <artifactId>hibernate-core</artifactId>
      <version>5.4.24.Final</version>
  </dependency>
  
  <dependency>
      <groupId>org.postgresql</groupId>
      <artifactId>postgresql</artifactId>
      <version>42.2.18</version>
  </dependency>
      
  
 </dependencies>
</project>

Step 3 : Create Hibernate configuration

Before creating Hibernate configuration file, we need to switch to Hibernate perspective in the eclipse by opening menu Window > Perspective > Open Perspective > Other…. Then select Hibernate on the perspective list.

Once we are in the Hibernate perspective, we can create Hibernate configuration from menu File > Hibernate Configuration File (*.cfg.xml)

Place the configuration file in the directory src/main/java, then click Next.

On the next wizard dialog, fill in the configuration items according to our database specification: database dialect, database driver, connection URL, default schema (if we want to generate for a specific schema; otherwise leave it empty to generate for all), username, and password. On the wizard dialog, ensure the checkbox Create a console configuration is selected.

In the Hibernate Configurations pane, the configuration that has been created should be listed. If it is not listed, press the Refresh or Rebuild configuration button in the top right corner of the pane. The generated Hibernate configuration should look like the following figure. We can expand the Database element of the Hibernate configuration to ensure we have a valid database connection.

Step 4 : Run Hibernate code generation

To run Hibernate code generation, ensure we select the active Hibernate Configuration in the Hibernate configuration pane. Then, open the menu of Run > Hibernate Code Generation Configuration…

In the Hibernate Code Generation Configuration, fill the package where the domain class will be in.

In the Exporter tab, select Use Java 5 syntax and Generate EJB3 annotations. In the exporter list option, we can select items in the list that we need. In this post we only select Domain code that has annotation as Hibernate mapping. Finally click Apply and Run to generate the domain code.

NOTES:

If the annotations are not generated in the domain classes, open the Hibernate Configuration and edit it. Then change the Hibernate Version to 5.2. Finally, run again as in step 4.

References

  1. https://docs.jboss.org/tools/4.1.0.Final/en/hibernatetools/html_single/index.html
  2. https://stackoverflow.com/questions/50837574/annotation-not-created-when-generating-hibernate-mapping-files
  3. https://www.youtube.com/watch?v=KO_IdJbSJkI&ab_channel=CodeJava

PPTP on Ubuntu


Previously, I posted how to connect macOS to a VPN server through the PPTP protocol. This post describes how to do the same thing on Ubuntu. I used Ubuntu 18.04.3 LTS for testing.

The first step is installing PPTP client for Ubuntu.

sudo apt-get -y install pptp-linux

Create VPN configuration file

sudo nano /etc/ppp/peers/myPPTP

paste the following script

pty "pptp IP_ADDRESS --nolaunchpppd --debug"
name USERNAME
password PASSWORD
remotename PPTP
require-mppe-128
require-mschap-v2
refuse-eap
refuse-pap
refuse-chap
refuse-mschap
noauth
debug
persist
maxfail 0
defaultroute
replacedefaultroute
usepeerdns

Then save and exit the file. Before initiating the VPN connection, change the file's permissions.

chmod 600  /etc/ppp/peers/myPPTP

To connect to the VPN server, type the following command.

pon myPPTP

To disconnect from the VPN server, run the following command.

poff myPPTP

If you fail to connect to the VPN server, please check the firewall configuration.

References

  1. https://support.strongvpn.com/hc/en-us/articles/360003513553-PPTP-Setup-Debian-Ubuntu-Command-Line
  2. https://www.networkinghowtos.com/howto/connect-to-a-pptp-vpn-server-from-ubuntu-linux/

Configure PHP and Virtual Host macOS High Sierra


Apache comes pre-installed on macOS High Sierra. We just need to start its service with the following command to activate it.

$ sudo apachectl start

Then we open http://localhost in a browser. Apache will display the default HTML page that comes with it.

Activating PHP Module

High Sierra also comes with PHP 7, so we don't need to install it manually. To activate the PHP module:

  1. Edit /etc/apache2/httpd.conf
  2. Uncomment / remove remark of #LoadModule php7_module libexec/apache2/libphp7.so
  3. Save it and restart apache using bash $ sudo apachectl restart

After applying the steps above, the PHP module should be activated and ready to use. In this post we will test it after configuring a virtual host.

Configuring Virtual Host

The steps of configuring apache virtual host are :

  1. Enabling virtual host configuration in apache config by editing /etc/apache2/httpd.conf.

     $ sudo nano /etc/apache2/httpd.conf
    
  2. Uncomment the section Include /private/etc/apache2/extra/httpd-vhosts.conf, then save it.

  3. Create site directory. As an example in this post, let’s create a Site folder in home directory called /Users/USERNAME/Sites. Our website sample directory will be put in it, let’s create a directory called /Users/USERNAME/Sites/neutro.io and create an /Users/USERNAME/Sites/neutro.io/index.php with simple PHP syntax.

     <?php
     phpinfo();
     ?>
    
  4. Create virtual host configuration by editing the virtual host config

     $ sudo nano /etc/apache2/extra/httpd-vhosts.conf
    

    The following code is an example of virtual host with domain name neutro.io

     <VirtualHost *:80>
         ServerName neutro.io
         ServerAlias www.neutro.io
         DocumentRoot "/Users/neutro/Sites/neutro.io"
    
     <Directory /Users/neutro/Sites/neutro.io>
             Options Indexes FollowSymLinks
             #Options All Indexes FollowSymLinks
             AllowOverride None
             Require all granted
     </Directory>
    
    
         ErrorLog "/private/var/log/apache2/neutro.io-error_log"
         CustomLog "/private/var/log/apache2/neutro.io-access_log" common
         ServerAdmin web@neutro.io
     </VirtualHost>
    

    In this example, we create a neutro.io virtual host that refers to /Users/neutro/Sites/neutro.io as physical directory.

  5. Register domain for localhost

    Since we use neutro.io as domain for localhost, we need to add the domain and www alias to resolve to the localhost address by editing

        $ sudo nano /etc/hosts
    

    and add the following line

        127.0.0.1   neutro.io   www.neutro.io
    
  6. Restart apache

     $ sudo apachectl restart
    

When we open http://neutro.io in a browser, we should get a page that displays the PHP info.

Losing Default Localhost

After configuring the virtual host, we may lose the previous default localhost that points to the /Library/WebServer/Documents/ directory; we may get a 403 Forbidden error when visiting http://localhost. To get around this, we need to add a vhost for localhost and declare it before any of the others. The following code is our new virtual host configuration after adding the localhost entry.

<VirtualHost *:80>
    ServerName localhost
    DocumentRoot /Library/WebServer/Documents/
</VirtualHost>

<VirtualHost *:80>
    ServerName neutro.io
    ServerAlias www.neutro.io
    DocumentRoot "/Users/neutro/Sites/neutro.io"

   <Directory /Users/neutro/Sites/neutro.io>
        Options Indexes FollowSymLinks
        #Options All Indexes FollowSymLinks
        AllowOverride None
        Require all granted
   </Directory>


    ErrorLog "/private/var/log/apache2/neutro.io-error_log"
    CustomLog "/private/var/log/apache2/neutro.io-access_log" common
    ServerAdmin web@neutro.io
</VirtualHost>

Restart apache and open http://localhost in browser.

References

  1. https://websitebeaver.com/set-up-localhost-on-macos-high-sierra-apache-mysql-and-php-7-with-sslhttps
  2. https://coolestguidesontheplanet.com/set-up-virtual-hosts-in-apache-on-macos-high-sierra-10-13/

PPTP on macOS


One day I needed to connect my macOS machine to the network of a client of the company I work for, via a Point-to-Point Tunneling Protocol (PPTP) VPN. Unfortunately, Apple removed PPTP support in macOS Sierra, so I had to find an alternative. Some alternatives I found are third-party applications that need a one-time purchase or an annual subscription. In fact, Apple only removed the user interface option for PPTP VPN; the underlying libraries are still available on Sierra.

Since the PPTP libraries are still available on Sierra, theoretically we should be able to call them from the terminal. I found 3 blogs about the PPTP protocol on macOS and list them in the references section of this post. Basically, all three use the same technique: write a script containing the PPTP configuration, put it in /etc/ppp/peers/, and call it with the pppd command from the terminal.

First of all create a file called /etc/ppp/peers/pptpvpn-client1

$ sudo nano /etc/ppp/peers/pptpvpn-client1

Fill pptpvpn-client1 with the configuration that the pppd daemon will use to connect.

/etc/ppp/peers/pptpvpn-client1
plugin PPTP.ppp
noauth
# logfile /tmp/ppp.log
remoteaddress "xxx.xxx.xxx.xxx"
user "xxxxxx"
password "xxxxxxxx"
redialcount 1
redialtimer 5
idle 1800
# mru 1368
# mtu 1368
receive-all
novj 0:0
ipcp-accept-local
ipcp-accept-remote
# noauth
refuse-eap
refuse-pap
refuse-chap-md5
hide-password
mppe-stateless
mppe-128
# require-mppe-128
looplocal
nodetach
# ms-dns 8.8.8.8
usepeerdns
# ipparam gwvpn
defaultroute
debug

Then open a terminal and call

$ sudo pppd call pptpvpn-client1

If you cannot connect with the configuration I use, check the error messages displayed in the terminal. Some configuration items may not match the settings of the VPN server you are connecting to.

If you get no error messages and a connection is established with your VPN network, you can open a new tab in the terminal and try to ping an IP address in the VPN's local network.

References

  1. https://smallhacks.wordpress.com/2016/12/20/pptp-on-osx-sierra/
  2. https://malucelli.net/2017/05/16/pptp-vpn-on-macos-sierra/
  3. https://www.cts-llc.net/2017/02/21/pptp-on-osx-just-one-last-time.html

Create and Consume C++ Class DLL on Windows

| Comments

While visiting clients of the company I work for, I still sometimes find applications, especially desktop applications, built on unmanaged code (such as Delphi, Visual Basic 6, or C++), even though at the time of this blog post many Windows applications are built on .NET (managed code). There are various reasons why they do not migrate to managed code despite its advantages: the application still runs well on the OS version they use, a rewrite would cost extra, and so on. This means unmanaged applications are not dead at all for LOB apps, even though their share is much smaller than that of managed ones.

Maybe this seems like an out-of-date topic in the .NET era, but at least this post serves as a note for myself in case I need it some other day.

While developing an application, we often want to share some of our code with other applications. A Dynamic Link Library (DLL) is Microsoft's implementation of the shared library concept on Microsoft Windows. The term DLL in this post refers to unmanaged code, and I focus only on DLLs built with the Visual C++ compiler in a Windows environment.

When we create a DLL, we also create a .lib import library that contains information about the exported classes and functions. When we build an executable that calls the DLL, the linker uses the exported symbols in the .lib file to store this information for the loader. When the loader loads the DLL, it is mapped into the memory space of the executable.

An executable file links to (or loads) a DLL in one of two ways: implicit or explicit linking. This post creates a simple sample of each, showing how a C++ class is exported both ways. The samples were created with the Microsoft Visual Studio 2013 Ultimate IDE. To simplify the code, I created a single solution containing a Win32 DLL project and a console application client. The DLL project contains classes for both the implicit and the explicit linking samples, and the console application contains the caller code for both. Here are the classes I use in this sample.

Implicit Linking

With implicit linking, the operating system loads the DLL when the executable that uses it is loaded. The executable client calls the exported functions of the DLL just as if the functions were statically linked and contained within the executable. Implicit linking is sometimes referred to as static load or load-time dynamic linking[4]. Now let's create a sample DLL with implicit linking.

First of all, create an empty solution in Visual Studio by selecting File > New Project > scroll down on the left pane, expand Other Project Types > Visual Studio Solutions and select Blank Solution. Fill in the solution name as VCppDLL.

Now we have an empty solution in Visual Studio. Right-click the VCppDLL solution > Add > New Project. In the left pane of the New Project dialog box, expand Installed > Templates > Visual C++, and then select Win32. Fill in the project name as MathWin32DLL, then click OK.

On the Win32 Application Wizard dialog in the Application Settings part, select DLL and check Empty project, then click Finish

Now we have an empty C++ DLL project in the Visual Studio solution. As shown in the class diagram above, let's create a simple BaseMath class. Right-click the MathWin32DLL project > Add > Class. In the Visual C++ template on the left pane of the dialog, select C++ Class > click Add. In the Generic C++ Class Wizard, fill in the Class name as BaseMath, then click Finish. Edit BaseMath.h with the following code.

BaseMath.h
// If you are building the DLL project on the command line,
// use the /D compiler option to define the MATHDLLWIN32_EXPORTS symbol.

#pragma once

#ifdef MATHWIN32DLL_EXPORTS
#define Math_API __declspec(dllexport)
#else
#define Math_API __declspec(dllimport)
#endif // MATHWIN32DLL_EXPORTS

#include <string>

using namespace std;

namespace core
{
  class BaseMath
  {
  public:
      virtual void Destroy()
      {
          delete this;
      }

      virtual string Say(string& s) = 0;
      virtual double Calculate(const double a, const double b) = 0;
  };
}

We can delete the BaseMath.cpp file since we will make BaseMath an abstract class.

In Visual Studio, by default the New Project template for a DLL adds PROJECTNAME_EXPORTS to the defined preprocessor symbols of the DLL project. We can see the preprocessor symbol definitions in the Property Pages of the MathWin32DLL project under Configuration Properties > C/C++ > Preprocessor > Preprocessor Definitions.

In the code of BaseMath.h, when the MATHWIN32DLL_EXPORTS symbol is defined, the Math_API macro is set to the __declspec(dllexport) modifier; otherwise it is set to __declspec(dllimport). The __declspec(dllexport) modifier can be applied to classes, functions, or variables and tells the compiler and linker to export them from the DLL so that they can be used by other applications.

Meanwhile, when we include BaseMath.h in the client project, Math_API is set to __declspec(dllimport). This modifier optimizes the import of the exported class into an application.

Next, let's create another class called AddOperationMath. Edit AddOperationMath.h and AddOperationMath.cpp as follows.

AddOperationMath.h
#pragma once
#include "BaseMath.h"

using namespace std;

namespace core
{
  // MS Visual C++ compiler emits C4275 warning about not exported base class.
  class Math_API AddOperationMath : public BaseMath
  {
  public:
      AddOperationMath();
      virtual ~AddOperationMath();


      string Say(string& s);
      double Calculate(const double a, const double b);

      static double Add(const double a, const double b);
  };
}
AddOperationMath.cpp
#include "AddOperationMath.h"

namespace core
{
  AddOperationMath::AddOperationMath()
  {
  }

  AddOperationMath::~AddOperationMath()
  {
  }

  string AddOperationMath::Say(string& s)
  {
      string result =  s + " is calling add operation of class AddOperationMath";
      return result;
  }

  double AddOperationMath::Calculate(const double a, const double b)
  {
      return a + b;
  }

  double AddOperationMath::Add(const double a, const double b)
  {
      return a + b;
  }
}

The AddOperationMath class inherits from BaseMath. We also mark the AddOperationMath class with the Math_API macro defined in BaseMath.h, which means we expose the AddOperationMath class from the DLL to client executables. When we compile the DLL project, we get a warning:

warning C4275: non dll-interface class 'core::BaseMath' used as base for dll-interface class 'core::AddOperationMath'

In this case, ideally we should export (mark with the Math_API macro) both core::BaseMath and core::AddOperationMath so that the compiler does not emit the warning.

To complete our sample, let's create another project called MathWin32ClientConsole the same way we created the MathWin32DLL project, except select Console Application instead of DLL in the Application Settings dialog.

In the MathWin32ClientConsole project, right click > Add > New Item. Select Visual C++ project template on the left pane, then select C++ File (.cpp). Fill the name with Main.cpp.

To give the MathWin32ClientConsole project a reference to the MathWin32DLL project, right-click MathWin32ClientConsole project > Properties. At the top of the Property Pages dialog, expand Common Properties on the left pane > select References. Click the Add New Reference button, select Projects, check MathWin32DLL > OK. Now you should see MathWin32DLL added to the References pane as in the following picture.

To make the AddOperationMath class recognized in the MathWin32ClientConsole project, we have to include AddOperationMath.h. We could copy AddOperationMath.h and BaseMath.h into the MathWin32ClientConsole project, but that is not a good approach in our scenario: if we change one of them, we have to copy it to the MathWin32ClientConsole project directory again. To avoid this manual copying, we can add the MathWin32DLL project directory to the include path of MathWin32ClientConsole so that we can include any header file of MathWin32DLL when needed. To do that, open the Property Pages of MathWin32ClientConsole and select Configuration Properties > C/C++ > General. Open the drop-down next to the Additional Include Directories edit box and choose <Edit...>. Click the top pane of the Additional Include Directories dialog box to enable an edit control. In the edit control, enter $(SolutionDir)\MathWin32DLL, which tells Visual Studio to search for included header files in the MathWin32DLL directory inside the solution directory.

Now we can include header files defined in MathWin32DLL from MathWin32ClientConsole. Let's write code that calls the class defined in the DLL.

Main.cpp
#include <iostream>
#include <string>

#include "AddOperationMath.h"

using namespace std;


void CallDLLByImplicitLinking(double a, double b, string s);

int main()
{
  double a = 2;
  double b = 4;
  string s = "neutro";

  CallDLLByImplicitLinking(a, b, s);


  cout << "Press any key to exit ";
  cin.get();

  return 0;
}



void CallDLLByImplicitLinking(double a, double b, string s)
{
  cout << a << " + " << b << " = " << core::AddOperationMath::Add(a, b) << endl;

  core::AddOperationMath* math = new core::AddOperationMath();
  cout << math->Say(s) << endl;

  delete math;

  cout << endl << "===============================================================" << endl;
}
Output
2 + 4 = 6
neutro is calling add operation of class AddOperationMath

Press any key to exit

There is no need to explicitly specify a calling convention when exporting classes or their methods. By default, the C++ compiler uses the __thiscall calling convention for class methods. However, due to the different name decoration schemes used by different compilers, an exported C++ class can only be used by the same compiler, and the same version of that compiler. Only the MS Visual C++ compiler can use this DLL. Both the DLL and the client code must be compiled with the same version of MS Visual C++ to ensure that the name decoration scheme matches between the caller and the callee[5].

To use a DLL by implicit linking, an executable must include the header files that declare the data, functions or C++ classes exported by the DLL in each source file that contains calls to the exported data, functions, and classes. The classes, functions, and data exported by the DLL must all be marked __declspec(dllimport) in the header file. From a coding perspective, calls to the exported functions are just like any other function call.

To build the calling executable file, we must link with the import library (.lib). If we use an external makefile or build system, we need to specify the file name of the import library where we list other object (.obj) files or libraries that we link.

The operating system must be able to locate the DLL file when it loads the calling executable. This means that we must deploy or verify the existence of the DLL when our application is installed.

Explicit Linking

With explicit linking, the operating system loads the DLL on demand at run time. An executable that uses a DLL by explicit linking must make function calls to explicitly load and unload the DLL and to access the functions exported by it. Unlike calls to functions in a statically linked library, the client executable must call the exported functions through a function pointer. Explicit linking is sometimes referred to as dynamic load or run-time dynamic linking[4].

To use a DLL by explicit linking, applications must make a function call to explicitly load the DLL at run time. To explicitly link to a DLL, an application must [4]:

  1. Call LoadLibrary, LoadLibraryEx, or a similar function to load the DLL and obtain a module handle.

  2. Call GetProcAddress to obtain a function pointer to each exported function that the application calls. Because applications call the DLL functions through a pointer, the compiler does not generate external references, so there is no need to link with an import library. However, you must have a typedef or using statement that defines the call signature of the exported functions that you call.

  3. Call FreeLibrary when done with the DLL.

To create a sample for explicit linking, we will use an abstract interface (a class with pure virtual methods and no data) and create a factory function for object instantiation.

In MathWin32DLL, create a new class called LogarithmicMath. Edit the header and implementation files as follows.

LogarithmicMath.h
#pragma once

#include "BaseMath.h"

namespace core
{
  class LogarithmicMath : public BaseMath
  {
  public:
      LogarithmicMath();
      virtual ~LogarithmicMath();


      string Say(string& s);
      double Calculate(const double a, const double b);
  };
}
LogarithmicMath.cpp
#include "LogarithmicMath.h"
#include <math.h>

namespace core
{
  LogarithmicMath::LogarithmicMath()
  {
  }


  LogarithmicMath::~LogarithmicMath()
  {
  }

  string LogarithmicMath::Say(string& s)
  {
      string result = s + " is calling Logarithmic operation of class LogarithmicMath";
      return result;
  }

  double LogarithmicMath::Calculate(const double a, const double b)
  {
      return log10(b) / log10(a);
  }
}

Next, create a factory that encapsulates LogarithmicMath instantiation and will be called from the client app.

Factory.h
#include "BaseMath.h"

using namespace std;

extern "C" Math_API core::BaseMath* __cdecl CreateLogarithmicMath();
Factory.cpp
#include "Factory.h"
#include "LogarithmicMath.h"

using namespace std;

core::BaseMath* CreateLogarithmicMath()
{
  return new core::LogarithmicMath();
}

We can see that the LogarithmicMath class looks like a standard C++ class. Instead of exporting the LogarithmicMath class directly, we use a factory that handles the export mechanics.

Factory.h declares CreateLogarithmicMath with extern "C", which gives the function C linkage and prevents mangling of its name. The function is therefore exposed as a regular C function and can be recognized by any C-compatible compiler; the name is exported from the DLL unmangled (CreateLogarithmicMath). Math_API tells the linker to export the CreateLogarithmicMath function from the DLL, and __cdecl is the default calling convention for C and C++ programs.

Now let's create sample code in MathWin32ClientConsole by editing Main.cpp as follows.

Main.cpp
#include <iostream>
#include <string>

#include <Windows.h>

#include "AddOperationMath.h"

using namespace std;

typedef core::BaseMath* (__cdecl *LogarithmicMathFactory)();

void CallDLLByImplicitLinking(double a, double b, string s);
void CallDLLByExplicitLinking(double a, double b, string s);

int main()
{
  double a = 2;
  double b = 4;
  string s = "neutro";

  CallDLLByImplicitLinking(a, b, s);
  CallDLLByExplicitLinking(a, b, s);


  cout << "Press any key to exit ";
  cin.get();

  return 0;
}



void CallDLLByImplicitLinking(double a, double b, string s)
{
  cout << a << " + " << b << " = " << core::AddOperationMath::Add(a, b) << endl;

  core::AddOperationMath* math = new core::AddOperationMath();
  cout << math->Say(s) << endl;

  delete math;

  cout << endl << "===============================================================" << endl;
}


void CallDLLByExplicitLinking(double a, double b, string s)
{
  HMODULE dll = LoadLibrary(L"MathWin32DLL.dll");
  if (!dll)
  {
      cout << "Fail load library" << endl;
      return;
  }

  LogarithmicMathFactory factory = reinterpret_cast<LogarithmicMathFactory>(GetProcAddress(dll, "CreateLogarithmicMath"));

  if (!factory)
  {
      cerr << "Unable to load CreateLogarithmicMath from DLL!\n";
      FreeLibrary(dll);
      return;
  }

  core::BaseMath* instance = factory();
  cout << a << " log (" << b << ") = " << instance->Calculate(a, b) << endl;
  cout << instance->Say(s) << endl;

  instance->Destroy();

  FreeLibrary(dll);

  cout << endl << "===============================================================" << endl;
}

Now build and run MathWin32ClientConsole; we should get the following output.

output
2 + 4 = 6
neutro is calling add operation of class AddOperationMath

===============================================================
2 log (4) = 2
neutro is calling Logarithmic operation of class LogarithmicMath

===============================================================
Press any key to exit

In order to ensure proper resource release, the abstract interface provides an additional method for disposing of an instance; in this case we provide the Destroy method. Calling this method manually can be tedious and error prone. It is recommended to use a smart pointer for automatic resource release instead of releasing manually.

The code of this article can be found here

References

  1. https://docs.microsoft.com/en-us/cpp/build/walkthrough-creating-and-using-a-dynamic-link-library-cpp
  2. https://msdn.microsoft.com/en-us/library/1ez7dh12.aspx
  3. https://docs.microsoft.com/en-us/cpp/build/dlls-in-visual-cpp
  4. https://docs.microsoft.com/en-us/cpp/build/linking-an-executable-to-a-dll#determining-which-linking-method-to-use
  5. https://www.codeproject.com/Articles/28969/HowTo-Export-C-classes-from-a-DLL
  6. http://eli.thegreenplace.net/2011/09/16/exporting-c-classes-from-a-dll

WinMerge and DiffMerge as Git Diff Merge Tool

| Comments

In software development with source control, it is inevitable that our code sometimes conflicts with someone else's, since we work in a team. There are many diff and merge tools out there, and some of them can be integrated with Git. In this post I just want to note what I did on my development machines (Windows 7 and macOS Sierra).

DiffMerge on macOS

For my macOS development machine I use DiffMerge. DiffMerge is actually available not only for macOS but also for Windows and Linux, so it can serve as a Git diff/merge tool on those platforms as well. To configure Git to use DiffMerge, run the following commands in a terminal.

$ git config --global mergetool.prompt false
$ git config --global mergetool.keepBackup false
$ git config --global mergetool.keepTemporaries false

$ git config --global diff.tool diffmerge
$ git config --global difftool.diffmerge.cmd 'diffmerge "$LOCAL" "$REMOTE"'

$ git config --global merge.tool diffmerge
$ git config --global mergetool.diffmerge.cmd 'diffmerge --merge --result="$MERGED" "$LOCAL" "$(if test -f "$BASE"; then echo "$BASE"; else echo "$LOCAL"; fi)" "$REMOTE"'
$ git config --global mergetool.diffmerge.trustExitCode true

The commands will add the following configuration to the global .gitconfig

[mergetool]
  prompt = false
  keepBackup = false
  keepTemporaries = false

[diff]
  tool = diffmerge

[difftool "diffmerge"]
  cmd = diffmerge \"$LOCAL\" \"$REMOTE\"

[merge]
  tool = diffmerge

[mergetool "diffmerge"]
  cmd = "diffmerge --merge --result=\"$MERGED\" \"$LOCAL\" \"$(if test -f \"$BASE\"; then echo \"$BASE\"; else echo \"$LOCAL\"; fi)\" \"$REMOTE\""
  trustExitCode = true

We can also directly edit the .gitconfig and manually add the config code.

WinMerge 2.x on Windows

WinMerge is an open source differencing and merging tool for Windows. It can compare both folders and files, presenting differences in a visual text format that is easy to understand and handle. At the time of writing this blog post, WinMerge 3 is still in development and has no release yet. WinMerge 3 will be a modern compare/synchronization tool, based on the Qt library and cross-platform, so you will be able to use the same tool on Windows and Linux. From here on in this blog post, the term WinMerge means WinMerge 2.x.

After installing WinMerge, configure it as the diff and merge tool of Git by adding/editing the following config settings in C:\Users\{UserName}\.gitconfig

[mergetool]
  prompt = false
  keepBackup = false
  keepTemporaries = false

[merge]
  tool = winmerge

[mergetool "winmerge"]
  name = WinMerge
  trustExitCode = true
  cmd = "/c/Program\\ Files\\ \\(x86\\)/WinMerge/WinMergeU.exe" -u -e -dl \"Local\" -dr \"Remote\" $LOCAL $REMOTE $MERGED

[diff]
  tool = winmerge

[difftool "winmerge"]
  name = WinMerge
  trustExitCode = true
  cmd = "/c/Program\\ Files\\ \\(x86\\)/WinMerge/WinMergeU.exe" -u -e $LOCAL $REMOTE

The config above can also be set via the Git bash shell with the --global parameter instead of manual editing in a text editor.

Now, whenever you want to launch diffs, just use difftool[1]:

# diff the local file.m against the checked-in version
$ git difftool file.m

# diff the local file.m against the version in some-feature-branch
$ git difftool some-feature-branch file.m

# diff the file.m from the Build-54 tag to the Build-55 tag
$ git difftool Build-54..Build-55 file.m

To resolve merge conflicts

$ git mergetool

References

  1. http://twobitlabs.com/2011/08/install-diffmerge-git-mac-os-x/
  2. http://winmerge.org/

Arduino and NEO-6M GPS Module

| Comments

A couple of days ago, I met a friend of mine from university. He plays extensively with Arduino, Raspberry Pi, Orange Pi and other IoT stuff. He showed me how interesting IoT is, including wiring modules and, of course, programming them. I remembered that a few months ago I got an Arduino kit with a GPS module from another friend. The items had been idle since I had other things to do at work. Yesterday I finally had free time to play with the Arduino kit, and I had never played with or explored Arduino before.

For my first time exploring Arduino, the hardware I used was:

  1. Arduino Uno
  2. GPS Module NEO-6M-0-001

To get the Arduino working with the GPS module, I use the TinyGPS library. TinyGPS is designed to provide most of the NMEA GPS functionality. A detailed description of TinyGPS can be found here. TinyGPS is an additional library for Arduino, so we need to install it before including it in our project. The steps for installing additional Arduino libraries can be found here.

The table below shows the wiring between the Arduino and the NEO-6M-0-001 GPS module.

NEO-6M-0-001 GPS | Arduino Uno     | Cable
Vcc              | Power 3.3 Volt  | Black
GND              | GND             | White
TXD              | RX pin 4        | Gray
RXD              | TX pin 3        | Magenta

To simplify our testing, I grabbed the sample source code provided by TinyGPS. In this sample, I set the data rate for serial data transmission to 9600 bits per second (baud).

#include <SoftwareSerial.h>

#include <TinyGPS.h>

/* This sample code demonstrates the normal use of a TinyGPS object.
   It requires the use of SoftwareSerial, and assumes that you have a
   4800-baud serial GPS device hooked up on pins 4(rx) and 3(tx).
*/

TinyGPS gps;
SoftwareSerial ss(4, 3);

static void smartdelay(unsigned long ms);
static void print_float(float val, float invalid, int len, int prec);
static void print_int(unsigned long val, unsigned long invalid, int len);
static void print_date(TinyGPS &gps);
static void print_str(const char *str, int len);

void setup()
{
  //Serial.begin(115200);
  Serial.begin(9600);

  Serial.print("Testing TinyGPS library v. "); Serial.println(TinyGPS::library_version());
  Serial.println("by Mikal Hart");
  Serial.println();
  Serial.println("Sats HDOP Latitude  Longitude  Fix  Date       Time     Date Alt    Course Speed Card  Distance Course Card  Chars Sentences Checksum");
  Serial.println("          (deg)     (deg)      Age                      Age  (m)    --- from GPS ----  ---- to London  ----  RX    RX        Fail");
  Serial.println("-------------------------------------------------------------------------------------------------------------------------------------");

  //ss.begin(4800);
  ss.begin(9600);
  delay(1000);
}

void loop()
{
  float flat, flon;
  unsigned long age, date, time, chars = 0;
  unsigned short sentences = 0, failed = 0;
  static const double LONDON_LAT = 51.508131, LONDON_LON = -0.128002;

  print_int(gps.satellites(), TinyGPS::GPS_INVALID_SATELLITES, 5);
  print_int(gps.hdop(), TinyGPS::GPS_INVALID_HDOP, 5);
  gps.f_get_position(&flat, &flon, &age);
  print_float(flat, TinyGPS::GPS_INVALID_F_ANGLE, 10, 6);
  print_float(flon, TinyGPS::GPS_INVALID_F_ANGLE, 11, 6);
  print_int(age, TinyGPS::GPS_INVALID_AGE, 5);
  print_date(gps);
  print_float(gps.f_altitude(), TinyGPS::GPS_INVALID_F_ALTITUDE, 7, 2);
  print_float(gps.f_course(), TinyGPS::GPS_INVALID_F_ANGLE, 7, 2);
  print_float(gps.f_speed_kmph(), TinyGPS::GPS_INVALID_F_SPEED, 6, 2);
  print_str(gps.f_course() == TinyGPS::GPS_INVALID_F_ANGLE ? " " : TinyGPS::cardinal(gps.f_course()), 6);
  print_int(flat == TinyGPS::GPS_INVALID_F_ANGLE ? 0xFFFFFFFF : (unsigned long)TinyGPS::distance_between(flat, flon, LONDON_LAT, LONDON_LON) / 1000, 0xFFFFFFFF, 9);
  print_float(flat == TinyGPS::GPS_INVALID_F_ANGLE ? TinyGPS::GPS_INVALID_F_ANGLE : TinyGPS::course_to(flat, flon, LONDON_LAT, LONDON_LON), TinyGPS::GPS_INVALID_F_ANGLE, 7, 2);
  print_str(flat == TinyGPS::GPS_INVALID_F_ANGLE ? " " : TinyGPS::cardinal(TinyGPS::course_to(flat, flon, LONDON_LAT, LONDON_LON)), 6);

  gps.stats(&chars, &sentences, &failed);
  print_int(chars, 0xFFFFFFFF, 6);
  print_int(sentences, 0xFFFFFFFF, 10);
  print_int(failed, 0xFFFFFFFF, 9);
  Serial.println();

  smartdelay(1000);
}

static void smartdelay(unsigned long ms)
{
  unsigned long start = millis();
  do
  {
    while (ss.available())
      gps.encode(ss.read());
  } while (millis() - start < ms);
}

static void print_float(float val, float invalid, int len, int prec)
{
  if (val == invalid)
  {
    while (len-- > 1)
      Serial.print('*');
    Serial.print(' ');
  }
  else
  {
    Serial.print(val, prec);
    int vi = abs((int)val);
    int flen = prec + (val < 0.0 ? 2 : 1); // . and -
    flen += vi >= 1000 ? 4 : vi >= 100 ? 3 : vi >= 10 ? 2 : 1;
    for (int i=flen; i<len; ++i)
      Serial.print(' ');
  }
  smartdelay(0);
}

static void print_int(unsigned long val, unsigned long invalid, int len)
{
  char sz[32];
  if (val == invalid)
    strcpy(sz, "*******");
  else
    sprintf(sz, "%ld", val);
  sz[len] = 0;
  for (int i=strlen(sz); i<len; ++i)
    sz[i] = ' ';
  if (len > 0)
    sz[len-1] = ' ';
  Serial.print(sz);
  smartdelay(0);
}

static void print_date(TinyGPS &gps)
{
  int year;
  byte month, day, hour, minute, second, hundredths;
  unsigned long age;
  gps.crack_datetime(&year, &month, &day, &hour, &minute, &second, &hundredths, &age);
  if (age == TinyGPS::GPS_INVALID_AGE)
    Serial.print(" **** ");
  else
  {
    char sz[32];
    sprintf(sz, "%02d/%02d/%02d %02d:%02d:%02d ",
        month, day, year, hour, minute, second);
    Serial.print(sz);
  }
  print_int(age, TinyGPS::GPS_INVALID_AGE, 5);
  smartdelay(0);
}

static void print_str(const char *str, int len)
{
  int slen = strlen(str);
  for (int i=0; i<len; ++i)
    Serial.print(i<slen ? str[i] : ' ');
  smartdelay(0);
}

The output of this test in the Serial Monitor is shown below.

If you do not get output similar to the above (you get * in the table output), it means your Arduino failed to get data from the GPS module. Make sure the GPS module's LED is blinking, which indicates it is receiving data from the GPS satellites.

The other thing you should verify is the wiring: RX of the Arduino (pin 4, according to the SoftwareSerial statement) goes to the TX of the GPS, and Arduino pin 3 (ss TX) goes to the GPS RX.

To validate the accuracy of the GPS output (latitude, longitude) shown in the Arduino Serial Monitor, you can check it on Google Maps.


Import Existing Git Repository to Another

| Comments

While working with Git, we may need to import source code from an existing Git repository into our working copy, merge it, and push it to origin master. The scenario I had was:


  • I have a project template that I store in a Git repository. Let's say the URL is http://server/git/template.git
  • I have another git repository with url http://server/git/project1.git

What I need from the two repositories is to import all content (libraries, sources, etc.) from the template repository into my project1 working copy, since I don't want to code from scratch. Here are the steps.

# git clone project1
$ git clone http://server/git/project1.git
$ cd project1

# add remote url named REMOTE.TEMPLATE
$ git remote add REMOTE.TEMPLATE http://server/git/template.git

# fetch from REMOTE.TEMPLATE remote url
$ git fetch REMOTE.TEMPLATE

# checkout REMOTE.TEMPLATE/master and create a new branch called TEMPLATE
$ git checkout -b TEMPLATE REMOTE.TEMPLATE/master

#switch back to master branch
$ git checkout master

# merge TEMPLATE branch into master branch
$ git merge TEMPLATE

# commit changes
$ git commit

The next step is checking the merge result in our working copy of the master branch. If everything we imported is there, we can remove the REMOTE.TEMPLATE remote URL and the TEMPLATE branch to get rid of the extra branch before pushing.

# remove REMOTE.TEMPLATE remote address
$ git remote rm REMOTE.TEMPLATE

# remove template branch. It is useful to get rid of the extra branch before pushing
$ git branch -d TEMPLATE

# push to remote origin/master
$ git push

References

  1. http://stackoverflow.com/questions/1683531/how-to-import-existing-git-repository-into-another