What is milter?

Everyone gets tons of email these days: super duper offers from Amazon, and princesses and wealthy businessmen trying to offer you their money from some African country you have never heard of. Buried among all these emails in your inbox lie the one or two valuable ones from your friends, bank alerts or work-related stuff. Spam is a problem that email service providers have been battling for ages, and there are a few open-source spam-fighting tools available, like SpamAssassin or SpamBayes.

What is milter ?

Simply put, milter is mail filtering technology. It was designed by the Sendmail project and is now available in other MTAs as well. Historically people used all kinds of solutions for filtering mail on servers, using procmail or MTA-specific methods, and the current scene seems to be moving toward Sieve. But there is a huge difference between milter and Sieve. Sieve comes into the picture when the mail has already been accepted by the MTA and handed over to the MDA. Milter, on the other hand, springs into action in the mail-receiving part of the MTA: when a remote server opens a new connection to your MTA, the MTA gives you an opportunity to accept or reject the mail at every step of the way, from the new connection to the reception of each header and of the body.

milter protocol various stages

The picture above depicts a simplified version of how the milter protocol works. Full details of the milter protocol can be found here: https://github.com/avar/sendmail-pmilter/blob/master/doc/milter-protocol.txt. Milter is not only for filtering; you can also use it to modify the message or change headers.

HOW DO I GET STARTED WITH CODING MILTER PROGRAM ?

If you want to get started in C you can use libmilter. For Python you have a couple of options (a minimal pymilter sketch follows the list below):

  1. pymilter –  https://pythonhosted.org/milter/
  2. txmilter – https://github.com/flaviogrossi/txmilter
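Here is a minimal pymilter sketch (my own illustration, not from the original post): it rejects any message whose Subject contains "viagra" and accepts everything else. The socket address and the matching Postfix smtpd_milters setting are assumptions you would adapt.

import Milter

class SpamFilter(Milter.Base):

    def connect(self, IPname, family, hostaddr):
        # called for each new SMTP connection
        return Milter.CONTINUE

    def envfrom(self, mailfrom, *rest):
        return Milter.CONTINUE

    def header(self, name, value):
        # called once per received header
        if name.lower() == "subject" and "viagra" in value.lower():
            return Milter.REJECT
        return Milter.CONTINUE

    def eom(self):
        # end of message: last chance to accept or reject
        return Milter.ACCEPT

if __name__ == "__main__":
    Milter.factory = SpamFilter
    # Postfix would point at this socket, e.g. smtpd_milters = inet:127.0.0.1:8894
    Milter.runmilter("spamfilter", "inet:8894@127.0.0.1", timeout=60)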

Postfix also supports the milter protocol. You can find everything related to Postfix's milter support here: http://www.postfix.org/MILTER_README.html

WHY NOT SIEVE WHY MILTER ?

I found Sieve to be rather limited. It doesn't offer many options for implementing complex logic, and it was purposely made that way. Also, Sieve starts at the end of the mail reception process, after the mail has already been accepted by the MTA.

Coding a milter program in your favorite programming language gives you full power and lets you implement complex, creative stuff.

WATCHOUT!!!

When writing milter programs, take care to return a reply to the MTA quickly. Don't do long-running tasks in the milter while the MTA is waiting for a reply; this can have crazy side effects such as remote parties re-submitting the same mail multiple times and filling up your inbox.

Implementing Webhook Handler in Python.

What is Webhook ?

A webhook is an asynchronous HTTP callback triggered by an event. It is simple server-to-server communication for reporting that a specific event occurred on a server: the server on which the event occurred fires an HTTP POST request to another server, at a URL provided by the receiving server.

For example, whenever a colleague pushes code commits to GitHub, an event occurs on GitHub's server. If a webhook URL is provided in the GitHub settings, a webhook is fired to that URL: an HTTP POST request with the commit details inside the body in a specified format. More details on GitHub webhooks can be found here.

In this post I will share my experience of implementing a webhook handler in Python. Basic knowledge of implementing a web application in Python will help.

Webhook Handler

A webhook can be handled by simply providing a URL endpoint in a web application. Following is an example using Django; add the webhook URL in urls.py:
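A minimal sketch of urls.py, assuming a Django 1.x-style project; the path and view name are illustrative:

from django.conf.urls import url

from . import views

urlpatterns = [
    url(r'^webhook/$', views.webhook_handler, name='webhook_handler'),
]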

Now create a view function in views.py which will parse the data and process it. In most cases webhook data is sent in JSON format, so let's load the webhook data and pass it to a process_webhook function.

Most web applications accept POST requests only after verifying a CSRF token, but here we need to exempt the webhook endpoint from that check. So put the @csrf_exempt decorator above the view function. Also add the @require_POST decorator to ensure only POST requests are accepted.
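A sketch of the view with both decorators applied; where process_webhook lives is up to you (the import path here is illustrative):

import json

from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_POST

from .handlers import process_webhook  # illustrative import path


@csrf_exempt
@require_POST
def webhook_handler(request):
    # webhook payloads are JSON, so decode the request body first
    data = json.loads(request.body.decode('utf-8'))
    process_webhook(data)
    return HttpResponse(status=200)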

The URL-endpoint part of this implementation will differ for other Python web frameworks such as Flask, Tornado or Twisted, but the process_webhook function below stays the same regardless of framework.

Processing event

There may be different types of events we need to handle. So before implementing the process_webhook function, let's create a Python module named webhook_events.py that contains one function per event type, holding the logic for that particular event. In other words, we are going to map each event name to the function that handles that type of webhook event.

There are many ways to implement the process_webhook function and to map a webhook event to its function. We are going to discuss different implementations of process_webhook based on extensibility. The most basic version is below.
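A sketch of the basic version; it assumes the event name arrives in the payload under an "event" key (e.g. "issue.created"), which will vary by provider:

import webhook_events

def process_webhook(data):
    event = data.get('event')
    if event == 'issue.created':
        webhook_events.issue_created(data)
    elif event == 'issue.closed':
        webhook_events.issue_closed(data)
    # ... one branch for every event we serve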

A Better way

Now suppose there are tens of webhooks to be served. We certainly don't want to write repetitive code, so below is a better way of implementing process_webhook. Here we just replace the dot in the event name with an underscore to get the name of the function written in webhook_events.py for that event. If the function is not found, the event is not registered (not being served). This way, no matter how many webhooks need to be served, you just write the function that handles each one in webhook_events.py.
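A sketch of that approach, under the same assumption about where the event name lives:

import webhook_events

def process_webhook(data):
    event = data.get('event', '')
    function_name = event.replace('.', '_')
    handler = getattr(webhook_events, function_name, None)
    if handler is None:
        # event is not registered (not being served)
        return
    handler(data)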

Decorators

A more robust and Pythonic way of implementing process_webhook is to use decorators. Let's define a decorator in webhook_events.py which maps the event name to its function. Here EVENT_MAP is a dictionary inside a settings module, with the event name as key and the event function as value.
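A sketch of webhook_events.py with such a decorator; settings here is assumed to be an ordinary module of your own holding an initially empty EVENT_MAP dict, not Django's settings:

import settings

def register(event_name):
    # map event_name to the decorated handler in settings.EVENT_MAP
    def decorator(func):
        settings.EVENT_MAP[event_name] = func
        return func
    return decorator

@register('issue.created')
def issue_created(data):
    print('handling issue.created:', data)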

In this case, the process_webhook will look like below:
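A sketch, under the same assumptions as above:

import settings
import webhook_events  # importing the module runs the decorators and fills EVENT_MAP

def process_webhook(data):
    event = data.get('event', '')
    handler = settings.EVENT_MAP.get(event)
    if handler is not None:
        handler(data)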

This is the way I prefer to implement webhook handlers in Python. How would you prefer to do it? Please feel free to comment below.

FCM – send push notifications using Python

What is FCM ?

FCM (Firebase Cloud Messaging) is a cross-platform (Android, iOS and Chrome) messaging solution that lets you reliably deliver messages at no cost. FCM is best suited if you want to send push notifications to an app you built to run on Android and iOS. The advantage is that you don't have to deal separately with GCM (Google Cloud Messaging, now deprecated) and Apple's APNS: you hand your notification message over to FCM, and FCM takes care of communicating with APNS and the Android messaging servers to deliver it reliably.


Using FCM we can send a message to a single device or to multiple devices. There are two different types of messages: notification and data. Notification messages include JSON keys that are understood and interpreted by the phone's operating system; if you want to include customized, app-specific JSON keys, use a data message. You can combine the notification and data JSON objects in a single message, and you can send messages with different priorities.

Note: you need to set the priority to high if you want the phone to wake up and show the notification on screen.

Sending message with Python

We can use PyFCM to send messages via FCM. PyFCM is good for synchronous (blocking) Python; we will discuss a non-blocking option in the next section.

Install PyFCM using the following command:
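pip install pyfcm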

The following code will send a push notification to a single device:
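A sketch, assuming a valid FCM server key and a device registration token:

from pyfcm import FCMNotification

push_service = FCMNotification(api_key="<FCM server key>")

registration_id = "<device registration token>"
result = push_service.notify_single_device(
    registration_id=registration_id,
    message_title="Hello",
    message_body="Test notification sent with PyFCM",
    data_message={"key1": "value1"},  # optional app-specific payload
)
print(result)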

So, the PyFCM API is pretty straightforward to use.

Sending FCM push notification using Twisted

PyFCM, discussed above, is good enough if you want to send messages in a blocking fashion. If you have to send a high number of concurrent messages, then using Twisted is a good option.

Twisted Matrix

Network operations performed with the Twisted library don't block, which makes it a good choice when a program needs network concurrency. We can use the txFCM library to send FCM messages using Twisted.

Install txFCM using the following command:
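The package name below is an assumption; check the txFCM README if it differs:

pip install txfcm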

The following code sends an FCM message using txFCM:
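A sketch only: the class name TXFCMNotification and its import path are assumptions based on txFCM mirroring PyFCM's API, so check them against the txFCM README:

from twisted.internet import reactor
from txfcm import TXFCMNotification

push_service = TXFCMNotification(api_key="<FCM server key>")

def done(result):
    print(result)
    reactor.stop()

d = push_service.notify_single_device(
    registration_id="<device registration token>",
    message_title="Hello",
    message_body="Test notification sent with txFCM",
)
d.addBoth(done)

reactor.run()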

txFCM is built on top of PyFCM, so all the API calls available in PyFCM are also available in txFCM.

What is a DBF file? How to read it in Linux and Python?

What is a DBF file?

A DBF file is a standard database file used by dBASE, a database management system application. It organises data into multiple records with fields stored in an array data type. DBF files are also compatible with other “xBase” database programs, which became an important feature because of the file format’s popularity.

Tools which can read or open DBF files

Below is a list of programs which can read and open DBF files.

  • Windows
    1. dBase
    2. Microsoft Access
    3. Microsoft Excel
    4. Visual Foxpro
    5. Apache OpenOffice
    6. dbfview
    7. dbf Viewer Plus
  • Linux
    1. Apache OpenOffice
    2. GTK DBF Editor

How to read a DBF file in Linux?

The "dbview" command, available on Linux, can read DBF files.

The snippet below shows how to use the dbview command.
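For example (Debian/Ubuntu; the file name is illustrative):

$ sudo apt-get install dbview
$ dbview test.dbf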

How to read it using Python?

"dbfread" is a library available in Python to read DBF files. It reads DBF files and returns the data as native Python data types for further processing.

dbfread requires Python 3.2 or 2.7. It is a pure-Python module, so it doesn't depend on any packages outside the standard library.

You can install the library with the command below:
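pip install dbfread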

The snippet below reads a DBF file and retrieves the data as Python dictionaries.
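A sketch; people.dbf is an illustrative file name, and each record comes back as an ordered dictionary keyed by field name:

from dbfread import DBF

for record in DBF('people.dbf'):
    print(record)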

You can also use the with statement:

By default the records are streamed directly from the file. If you have enough memory you can load them into a list instead, which allows random access.
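A sketch of loading everything up front with load=True:

from dbfread import DBF

table = DBF('people.dbf', load=True)
print(table.records[0])    # first record
print(table.records[-1])   # last record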

How to write content to a DBF file using Python?

dbfpy is a Python-only module for reading and writing simple DBF files.

You can install it using the command below:
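Note that dbfpy targets Python 2:

pip install dbfpy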

The example below shows how to create a DBF file and write records into it.
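A sketch based on dbfpy's documented usage; the file name and field definitions are illustrative:

from dbfpy import dbf

db = dbf.Dbf("test.dbf", new=True)
db.addField(
    ("NAME", "C", 15),    # character field, width 15
    ("AGE", "N", 3, 0),   # numeric field
)

rec = db.newRecord()
rec["NAME"] = "John"
rec["AGE"] = 30
rec.store()

db.close()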

You can also update a DBF file record using dbfpy.

The below example shows how to update a record in a .dbf file.
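A sketch: open the file (dbfpy's readOnly flag defaults to False), change a field on the first record and store it back:

from dbfpy import dbf

db = dbf.Dbf("test.dbf")
rec = db[0]
rec["NAME"] = "Jane"
rec.store()
db.close()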

 

AWS Cognito Configurations

Introduction

Cognito is the AWS solution for managing user profiles, and Federated Identities help keep track of your users across multiple logins. Integrated into the AWS ecosystem, AWS Cognito opens up a world of possibility for advanced front end development as Cognito+IAM roles give you selective secure access to other AWS services.

Go to AWS Cognito on the AWS console to get started!

AWS console

Initial Setup — Cognito

AWS Cognito

We will be setting up AWS Cognito, which is a custom login pool (such as login with email). Cognito IS NOT a login manager for any type of login (such as Facebook and Gmail), only for custom logins.

Let’s first make a user pool by clicking on “Manage your User Pools”. A user pool is a group of users that fulfill the same designation. The setup screen should look like this:

User Pool Name

We’re gonna walk through this process step by step, so enter the Pool name of “App_Users” and click “Step through settings”. The next step is “Attributes”, where we define the attributes that our “App_Users” will have.

User Attributes

For now, we only want an email, a password and an "agentName". The email is our unique identifier for a user, and the password is a mandatory field (which is why you don't see it in the list of standard attributes). We want users to be able to go by a codename, so let's set up "agentName" as a custom attribute. We are only using "agentName" to show how to add custom attributes. Scroll down and you will see the option to add custom attributes.

Custom Attributes

As of the date this tutorial was written, you cannot go back and change the custom attributes (even though the AWS console appears to let you), so be sure to get this right the first time! If you need to change attributes, you will have to create a new user pool. Hopefully AWS fixes this issue soon. Anyway, moving on to account policies!

Account Policies

So we can see here that our passwords can be enforced to require certain characters. Obviously requiring a mix of various character types would be more secure, but users often don't like that. For a middle ground, let's just require the password to be 8+ characters in length and include at least 1 number. We also want users to be able to sign themselves up. The other parts are not so important, so let's move on to the next step: verifications.

Account Verifications

This part is cool, we can easily integrate multi-factor authentication (MFA). This means users must sign up with an email as well as another form of authentication such as a phone number. A PIN would be sent to that phone number and the user would use it to verify their account. We won’t be using MFA in this tutorial, just email verification. Set MFA to “off” and check only “Email” as a verification method. We can leave the “AppUsers-SMS-Role” (IAM role) that has been filled in, as we won’t be using it but may use it in the future. Cognito uses that IAM role to be authorized to send SMS text messages used in MFA. Since we’re not using MFA, we can move on to: Message Customizations.

Custom Account Messages


When users receive their account verification emails, we can specify what goes into that email. Here we have made a custom email and programmatically placed in the verification PIN represented as {####}. Unfortunately we can’t pass in other variables such as a verification link. To accomplish this, we would have to use a combination of AWS Lambda and AWS SES.

SES (Simple Email Service)

Next, in AWS SES, click "Verify a New Address" and enter the email address you would like to verify.

Now login to your email and open the email from AWS. Click the link inside the email to verify, and you will be redirected to the AWS SES page again. You have successfully verified an email! That was easy.

Now that that's done, let's return to AWS Cognito and move on to: Tags.

User Pool Tags

It is not mandatory to add tags to a user pool, but it is definitely useful for managing many AWS services. Let’s just add a tag for ‘AppName’ and set it to a value of ‘MyApp’. We can now move on to: Devices.

Devices

We can opt to remember our user’s devices. I usually select “Always” because remembering user devices is both free and requires no coding on our part. The information is useful too, so why not? Next step: Apps.

Apps

We want certain apps to have access to our user pool. These apps are not present anywhere else on the AWS ecosystem, which means when we create an “app”, it is a Cognito-only identifier. Apps are useful because we can have multiple apps accessing the same user pool (imagine an Uber clone app, and a complimentary Driving Test Practice App). We will set the refresh token to 30 days, which means each login attempt will return a refresh token that we can use for authentication instead of logging in every time. We un-click “Generate Client Secret” because we intend to log into our user pool from the front end instead of back end (ergo, we cannot keep secrets on the front end because that is insecure). Click “Create App” and then “Next Step” to move on to: Triggers.

Triggers

We can trigger various actions in the user authentication and setup flow. Remember how we said we can create more complex account verification emails using AWS Lambda and AWS SES? This is where we would set that up. For the scope of this tutorial, we will not be using any AWS Lambda triggers. Let’s move on to the final step: Review.

Review

Here we review all the setup configurations we have made. If you are sure about this info, click “Create Pool” and our Cognito User Pool will be generated!

Take note of the Pool Id us-east-1_6i5p2Fwao in the Pool details tab.

Notice the Pool Id

And the App client id 5jr0qvudipsikhk2n1ltcq684b in the Apps tab. We will need both of these in our client side app.

Notice the App client id

Now that Cognito is set up, we can set up Federated Identities for multiple login providers. In this tutorial we do not cover the specifics of FB Login, as it is not within the scope of this tutorial series. However, integrating FB Login is super easy and we will show how it's done in the section below.

Initial Setup — Federated Identities

AWS Cognito

Next we want to setup “Federated Identities”. If we have an app that allows multiple login providers (Amazon Cognito, Facebook, Gmail..etc) to the same user, we would use Federated Identities to centralize all these logins. In this tutorial, we will be using both our Amazon Cognito login, as well as a potential Facebook Login. Go to Federated Identities and begin the process to create a new identity pool. Give it an appropriate name.

Create a new identity pool

Now expand the “Authentication providers” section and you will see the below screen. Under Cognito, we are going to add the Cognito User Pool that we just created. Copy and paste the User Pool ID and App Client ID that we made note of earlier.

Authentication providers

And if we wanted Facebook login for the same user identity pool, we can go to the Facebook tab and simply enter our Facebook App ID. That’s all there is to it on the AWS console!

Facebook tab

Save the identity pool and you will be redirected to the below screen where IAM roles are created to represent the Federated Identity Pool. The unauthenticated IAM role is for non-logged in users, and the authenticated version is for logged in users. We can grant these IAM roles permission to access other AWS resources like S3 buckets and such. That is how we achieve greater security by integrating our app throughout the AWS ecosystem. Continue to finish creating this Identity Pool.

IAM roles

You should now see the below screen after successfully creating the identity pool. You now only need to make note of 1 thing which is the Identity Pool ID (i.e. us-east-1:65bd1e7d-546c-4f8c-b1bc-9e3e571cfaa7) which we will use later in our code. Great!

Sample code

Exit everything and go back to the AWS Cognito main screen. If we enter the Cognito section or the Federated Identities section, we see that we have the 2 necessary pools set up. AWS Cognito and AWS Federated Identities are ready to go!

AWS Cognito
AWS Federated Identities

That’s all for set up! With these 2 pools we can integrate the rest of our code into Amazon’s complete authentication service and achieve top tier user management.
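As a quick aside on where these IDs end up: the tutorial targets a front-end SDK, but the same pool can also be exercised server-side with boto3. This is only an illustration; the username, password and custom attribute values are made up:

import boto3

client = boto3.client('cognito-idp', region_name='us-east-1')

client.sign_up(
    ClientId='5jr0qvudipsikhk2n1ltcq684b',   # the App client id noted above
    Username='agent@example.com',
    Password='Password1',                    # satisfies the 8+ chars / 1 number policy
    UserAttributes=[
        {'Name': 'email', 'Value': 'agent@example.com'},
        {'Name': 'custom:agentName', 'Value': 'Agent47'},
    ],
)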

FreeSWITCH status on LED display using socket connection

This is a simple experiment to show FreeSWITCH status on an LED display using a socket connection. Here is a video:

What You Need

1. Raspberry Pi 3

2. MAX7219-based 8×8 LED matrix displays (4 or more). These are available in kit and assembled form and can be bought online, for example on Amazon. In my case the 4 modules are powered from the Raspberry Pi's GPIO pins; for more than 2 modules it is better to power them separately.

3. Female-to-female connector wires to connect the GPIO pins to the MAX7219 LED modules.

Next, what to do (installing FreeSWITCH)

1. Prepare the SD card, load Raspbian and install FreeSWITCH. For details:

https://www.algissalys.com/how-to/freeswitch-1-7-raspberry-pi-2-voip-sip-server

2. Install display drivers for the MAX7219:

git clone https://github.com/rm-hull/max7219.git
sudo python max7219/setup.py install

3. Do the wiring (as given below) between the Raspberry Pi GPIO and the MAX7219 matrix LED displays.

Pin   Name   Remarks       RPi Pin   RPi Function
1     VCC    +5V Power     2         5V0
2     GND    Ground        6         GND
3     DIN    Data In       19        GPIO 10 (MOSI)
4     CS     Chip Select   24        GPIO 8 (SPI CS0)
5     CLK    Clock         23        GPIO 11 (SPI CLK)

4. Run the demo program.

Edit matrix_demo.py according to the number of matrix devices used, i.e. cascaded=n; in my case n=4:

device = max7219(serial, cascaded=4, block_orientation=block_orientation)

sudo python max7219/examples/matrix_demo.py

At Last

Use an ESL connection between FreeSWITCH and the MAX7219 demo program. For details:

https://freeswitch.org/confluence/display/FREESWITCH/Python+ESL

Here is my source file.
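Since the original source file is not reproduced here, below is a rough sketch of the idea: poll FreeSWITCH over ESL and scroll the first line of its status output across the matrix. It assumes the Python ESL module, the luma.led_matrix package (the successor layout of the max7219 library installed above), and the default event-socket settings (127.0.0.1:8021, password ClueCon):

import time

import ESL
from luma.core.interface.serial import spi, noop
from luma.core.legacy import show_message
from luma.core.legacy.font import proportional, CP437_FONT
from luma.led_matrix.device import max7219

serial = spi(port=0, device=0, gpio=noop())
device = max7219(serial, cascaded=4, block_orientation=-90)

while True:
    con = ESL.ESLconnection('127.0.0.1', '8021', 'ClueCon')
    if con.connected():
        # "status" returns uptime and session counts as plain text
        text = con.api('status').getBody().splitlines()[0]
    else:
        text = 'FreeSWITCH down'
    show_message(device, text, fill="white", font=proportional(CP437_FONT))
    time.sleep(10)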

 

 

How does email work ?

Most people these days spend time every day sending and receiving emails. Email plays a vital role in our daily activities; as technology evolved it became one of the major communication tools. It may not cost you anything to send or receive a mail, but there is an intricate process behind it. You just press the send button, and your message goes through a complex mechanism before it is received by the recipient.

The invention of email started back in 1961, and there is more than a couple of people who can be listed as its inventors.

The evolution started with messaging users on the same computer, then message transfer between computers, then messaging among multiple users and multiple computers, and finally email as we know it. Email has become a revolutionary communication tool and is now part of our daily life. Let's see how email really works.

Terminology

There are different components involved in the email system, and a few abbreviated terms. Having an idea of the terminology and abbreviations will help you better understand the system.

MUA Mail User Agent
MSA Mail Submission Agent
MTA Mail Transfer Agent
MDA Mail Delivery Agent
MX Mail Exchange
DNS Domain Name System

MUA (Mail User Agent) is the software the user uses to send mail or retrieve it from the server.

MSA is a piece of software installed on the mail server; it receives the message and is responsible for handing it over to the MTA (Mail Transfer Agent) for delivery to the destination.

MTA is the Mail Transfer Agent. It is the piece of software on the server responsible for routing mail to the destination mail server, which is why it is also called a mail router or mail server. A few popular MTA (mail server) packages are Postfix, qmail, Courier Mail and Sendmail.

Postfix is the most widely used and comes with many Linux distributions.

You can find an exhaustive list of mail server software here.

Protocols involved

There are different protocols involved in the email system, and all of them are required to get email delivered to the recipient. They are the building blocks of the email system:

SMTP Simple Mail Transfer Protocol
IMAP Internet Message Access Protocol
POP Post Office Protocol
DNS Domain Name System (Protocol)

How does email work?

This section gives you an idea of how these protocols work together to deliver mail to the recipient. Here is an abstract overview of the email system.

How does email work abstract

The figure above gives a simple abstract overview of the email system. As shown, SMTP is the protocol the sender uses to push an email to the server, while IMAP and POP are the protocols the recipient uses to check or retrieve messages from the server. The recipient's MUA is configured to use IMAP, POP or both. IMAP is bidirectional whereas POP is unidirectional. We will see more about POP and IMAP in a later section.

As the abstract overview shows, the sender's email client (MUA) sends the message to its mail server (MTA) using SMTP. The mail server looks up the destination, connects to the destination mail server (another MTA/MDA) and passes the message on, again using SMTP. The recipient's server stores the received mail locally, and the recipient later checks it with a mail client that connects to that server.

Now let's look at the flow of email in more detail with the following figure.

Here Alice is sending an email to Bob (bob@domain.com) using her email client: she pushes her message to her server using SMTP. The mail server determines the destination by looking up the MX record for the destination domain, i.e. the domain after the @ in the recipient's address. Once Alice's (the sender's) mail server receives the answer to its DNS query for the MX record, it connects to the destination mail server and delivers the message using SMTP. The destination mail server stores the message locally, and Bob checks the newly arrived mail with his mail client.

I would like to illustrate this process further with the following figure.

In the figure above, new components are introduced: the MSA, the MDA and extra MX servers.

The MSA (Mail Submission Agent) is the piece of software that receives the message from the MUA. It speaks the same SMTP protocol (submission traditionally uses port 587, though port 25 is also seen). In practice most MTAs also perform the MSA function, so you can treat the MTA as the MSA.

The MDA (Mail Delivery Agent, or message delivery agent) is the software component responsible for delivering e-mail messages to a local recipient's mailbox. Don't let this confuse you: most MTAs perform the MDA function as well, although there is software designed to work only as an MDA.

Here you can see that in practice there is usually more than one mail server (MX server); the extra MX servers are there for backup. When the sender queries DNS for the MX record, it may get back several MX servers with different priorities. Below is sample output of a DNS query for Gmail's MX records; Gmail has five MX servers with different priorities.
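For illustration, a query like dig gmail.com MX returns an ANSWER SECTION along these lines (TTLs and record order will vary):

;; ANSWER SECTION:
gmail.com.    3600    IN    MX    5 gmail-smtp-in.l.google.com.
gmail.com.    3600    IN    MX    10 alt1.gmail-smtp-in.l.google.com.
gmail.com.    3600    IN    MX    20 alt2.gmail-smtp-in.l.google.com.
gmail.com.    3600    IN    MX    30 alt3.gmail-smtp-in.l.google.com.
gmail.com.    3600    IN    MX    40 alt4.gmail-smtp-in.l.google.com.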

The result above is output from the command-line utility dig, which we use to query DNS. In the ANSWER SECTION you can see a number in front of each domain; that number decides the priority of the MX record, and the domain with the lowest number has the highest priority. Here gmail-smtp-in.l.google.com is the most preferred MX record because it has the lowest number, 5. The sending MTA will try to connect to the most preferred MX first; if that server is down, it tries the next MX (mail exchange server). As a result, mail is delivered with no downtime.

Once the mail is received by the destination mail server, it stores the message in a mail store. Two types of mail store are commonly used by mail server software:

  • Maildir
  • mbox

Locally stored mail is then accessed (fetched) by the recipient's client using either POP or IMAP.

Protocols POP and IMAP

POP and IMAP are application-layer protocols. As described above, they are the protocols the recipient's email client (MUA) uses to fetch or access mail. The two protocols are different and serve different purposes.

POP stands for Post Office Protocol, and POP3 is its version 3. As the name suggests, this protocol is used by the client to download messages from the server. Once a message is downloaded it is removed from the server (unless you set the "leave a copy on server" flag), much like a postcard that is handed over at its destination. POP3 suits you well if you are the only one accessing the mailbox, from one location, and it also saves some storage on the server. The flip side is that with plain POP3 you can't access the same mail from different clients.

Unlike POP3, IMAP does not delete the message from the server after downloading it; it simply accesses messages the way a browser accesses web pages. That makes it handy if you use multiple clients from different locations.

 

References:

[1] https://tools.ietf.org/html/rfc3501
[2] https://www.ietf.org/rfc/rfc1939.txt
[3] https://tools.ietf.org/html/rfc5321

InlineCallbacks

Twisted features a decorator named inlineCallbacks which allows you to work with deferreds without writing callback functions.

This is done by writing your code as generators, which yield deferreds instead of attaching callbacks.

Consider the following function written in the traditional deferred style:
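The original snippet is not reproduced here, so below is a sketch of the traditional style using the txredisapi package (imported as redis, matching the redis.Connection call mentioned below); the key name is illustrative:

import txredisapi as redis
from twisted.internet import reactor

def get_value(key):
    d = redis.Connection()            # Deferred that fires with a connection

    def connected(conn):
        return conn.get(key)          # another Deferred, fired with the value

    d.addCallback(connected)
    return d

def done(value):
    print("value:", value)
    reactor.stop()

get_value("mykey").addCallback(done)
reactor.run()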

using inlineCallbacks, we can write this as:

from twisted.internet.defer import inlineCallbacks
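from twisted.internet.defer import returnValue

# Sketch continuing the import above (not the post's original listing); it
# rewrites the txredisapi example from the traditional style shown earlier.
@inlineCallbacks
def get_value(key):
    conn = yield redis.Connection()   # yield the Deferred; Twisted hands back its result
    value = yield conn.get(key)
    returnValue(value)                # the inlineCallbacks way of returning a value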

Instead of calling addCallback on the deferred returned by redis.Connection, we yield it. This causes Twisted to return the deferred's result to us.

Though the inlineCallbacks version looks like synchronous code that blocks while waiting for the request to finish, each yield statement allows other code to run while the yielded deferred waits to fire.

inlineCallbacks becomes even more powerful when dealing with complex control flow and error handling.

txRedis

txredis is a non-blocking client for the redis database, written in Python. It uses twisted for the asynchronous communication with redis.

Install
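txredis can be installed from PyPI:

pip install txredis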

Now, to check whether txredis is properly installed, go to the Python prompt and import txredis:
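If the import succeeds with no error, txredis is installed:

$ python
>>> import txredis
>>>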

Example :

Using redis-cli, add a hash of users to the redis server with hmset:
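For example (the hash name users and the values are illustrative; the code further below reads user abc):

127.0.0.1:6379> hmset users abc 123 xyz 456
OK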

hmset sets each key to its value within the named hash, for every key/value pair in the mapping.

Now, using Python and Twisted, fetch a user:
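A sketch using txredis with Twisted's ClientCreator; the import path txredis.client.RedisClient may differ slightly between txredis versions, and the hash and user names match the hmset example above:

from twisted.internet import reactor, protocol
from txredis.client import RedisClient

HOST, PORT = 'localhost', 6379

redisCreator = protocol.ClientCreator(reactor, RedisClient)
redisDefer = redisCreator.connectTCP(HOST, PORT)

def onSuccess(value):
    print('value:', value)           # prints 123 for user abc
    reactor.stop()

def onFailure(reason):
    print('hget failed:', reason)
    reactor.stop()

def onRedisConnect(redis):
    d = redis.hget('users', 'abc')   # returns a Deferred with the stored value
    d.addCallbacks(onSuccess, onFailure)

def errRedisConnect(reason):
    print('connection failed:', reason)
    reactor.stop()

redisDefer.addCallbacks(onRedisConnect, errRedisConnect)
reactor.run()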

The code above connects to the redis server: redisCreator.connectTCP returns a deferred, and the appropriate methods are registered with addCallbacks/addErrback. The callback functions onRedisConnect and errRedisConnect handle connection success and failure.

On success, hget returns the value of the given key within the hash name, again as a deferred; its result is handled by the callbacks onSuccess and onFailure, which log the outcome.

The output is 123, which corresponds to the user abc.

WebRTC

WebRTC provides Real-Time Communications directly in the web browser and on devices, without requiring plug-ins such as Adobe Flash or Silverlight.

FreeSWITCH is a WebRTC gateway because it’s able to accept encrypted media from browsers, convert it, and exchange it with other communication networks  that use different codecs and encryptions, for example, PSTN, mobile carriers, legacy systems, and others. FreeSWITCH can be a gateway between your SIP network and applications and billions of browsers on desktops, tablets, and smartphones.

Configuration :

Look for the following parameters in the Sofia profile and uncomment them:
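In a stock install these parameters live in the internal Sofia profile and look roughly like this (values may differ in your installation):

<param name="ws-binding" value=":5066"/>
<param name="wss-binding" value=":7443"/>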

Clients :

How it works :

By default, Sofia listens on port 7443 for WSS clients. You may want to change this port if your clients need to traverse very restrictive firewalls. Edit /usr/local/freeswitch/conf/sip_profiles/internal.xml and change the "wss-binding" value to 443. Port 443 is the HTTPS (SSL) port and is almost universally open in firewalls. Remember that if you use port 443 for WSS you cannot use the same port for HTTPS, so you will need to deploy your secure web server on another machine.

Example :

SIP signaling in JavaScript with SIP.js (WebRTC client)

Let's carry out the most basic interaction with web browser audio/video through WebRTC. We'll start with SIP.js.

A web page will display a click-to-call button, and anyone can click. That call will be answered by our company’s PBX and routed to our employee extension (1000). Our employee will wait on a browser with the “answer” web page open, and will automatically be connected to the incoming call.

call.html :

call.js :

answer.html :

answer.js :

How it works :

Our employee (the callee, or the person who will answer the call) will sit tight with the answer.html web page open on their browser.

Our customer (the caller, or the person who initiates the communication) will visit the call.html webpage and then click on the Start Call button. This clicking will activate the JavaScript that creates the communication session using the invite method of the user agent, passing as an argument the SIP address of our employee.

use this to see if ws and wss work :

How to Implement Multiservice in Twisted

MultiService is a service collection provided by Twisted; it is useful for creating a new service that combines two or more existing services.

The major tool for managing Twisted applications is a command-line utility called twistd. twistd is cross-platform and is the recommended tool for running Twisted applications.

The core component of the Twisted Application infrastructure is the Application object, which represents your application. The Application acts as a container for any "services" that your application provides; this is done through Services.

A Service is anything that can be started and stopped. An Application object can contain many services, or even hierarchies of services, built with MultiService or your own custom IServiceCollection implementations.

Multiservice Implementation:

To use MultiService (which implements IService), first import the internet and service modules, as shown below.
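The imports look like this:

from twisted.application import internet, service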

 

Example :
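Since the original listing is not included here, below is a sketch of such a .tac file; the service name agentservice and the trivial twisted.web resource are illustrative:

# serviceexample.tac
from twisted.application import internet, service
from twisted.web import server, resource

class DemoPage(resource.Resource):
    isLeaf = True

    def render_GET(self, request):
        return b"Hello from port %d\n" % request.getHost().port

# top-level collection that will hold both TCP servers
agentservice = service.MultiService()

# web service on port 8082
site1 = server.Site(DemoPage())
internet.TCPServer(8082, site1).setServiceParent(agentservice)

# another web service on port 8083
site2 = server.Site(DemoPage())
internet.TCPServer(8083, site2).setServiceParent(agentservice)

# the Application object that twistd looks for
application = service.Application("ServiceExample")
agentservice.setServiceParent(application)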

To run it, save the code above in a file named serviceexample.tac. A "tac" file is a regular Python file; in the Twisted application infrastructure, protocol implementations live in a module, services using those protocols are registered in a Twisted Application Configuration (TAC) file, and the reactor and configuration are managed by an external utility.

Here I use the MultiService functionality from the service module: agentservice is a MultiService object, and child services are attached to it with setServiceParent (which calls the collection's addService method under the hood). A service can be a web server, an FTP server, an SSH client and so on. Finally the application is given a name and the MultiService is attached to the Application object with setServiceParent.

In the sketch above, the first TCPServer service listens on port 8082.

Another service, built the same way, listens on port 8083.

To run serviceexample.tac with the twistd program, use the command twistd -y serviceexample.tac -n. Then open a browser and visit localhost:8082 and localhost:8083; you will see the result on each page, and both TCP servers are active.

Compile C program using gcc in Linux

"This post explains how to use gcc to compile a C program on Linux."

Compile  C program using gcc:

  • What is a compiler:

A compiler is like a translator between a programming language and machine language. It converts source written in a programming language into a file of executable instructions for the computer. Different programming languages have different compilers, and compilers differ from operating system to operating system.

  • Open text editor:

Compiling a C program starts with a text editor such as vi, which is generally built into Linux operating systems, to write the program. Open a terminal on your system and start from there.

  • Write code:

In the terminal type vi sample.c and press Enter; this opens the vi text editor with the given file name. Press a or i to enter insert mode, then type the C program. After typing it, press Esc, then :wq and Enter to save and exit vi.

  • Compile using gcc command:

Now we are back in the terminal. Type the ls command to see the saved file in the listing, then type gcc sample.c -o sample and press Enter. The gcc compiler compiles the C file and produces an executable with the name we gave.

  • Execute output file:

If there are any mistakes or errors in the program, the compiler prints warnings and error messages with line numbers so they are easy to find; correct them and compile again. If it compiles successfully it produces the executable file. Use ls to check that the file is there, then type ./sample in the terminal to execute it and see the result (a sample terminal session is sketched below).

During compilation the preprocessor pulls in the header files included at the top of the program, such as <stdio.h>, from the C libraries. The compiler also generates some intermediate files; one of them is the object file.
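A sample terminal session, assuming sample.c simply prints "Hello, world!":

$ ls
sample.c
$ gcc sample.c -o sample
$ ls
sample  sample.c
$ ./sample
Hello, world!
$ gcc -c sample.c        # stop after compilation, producing the object file sample.o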