Tuesday, 17 December 2013

Infinite Snowflake

It's close to the Christmas holiday season, so I thought I would spend some time writing something seasonal.   My son, who is rather mathematically minded, was asking me about finding recurring digits in the reciprocals of primes, for example 1/7, 1/11, 1/13 etc., and somehow this got me looking at the binary digits of Pi, and after a little more browsing around Wikipedia, onto the Thue-Morse binary sequence.

The Thue-Morse binary sequence starts with zero and successively appends to the existing sequence the Boolean complement of the sequence so far.   One interesting feature is that a turtle graphics program can be controlled by feeding it the successive digits from the sequence so that:
  • A zero moves the cursor forward by a step
  • A one turns the cursor anti-clockwise by 60 degrees (or pi/3 radians)
 ..and a Koch snowflake is drawn.  So, I've implemented this in C and obfuscated it a little just for fun:

#include <math.h>
#define  Q (1<<9)
#define M /**/for
#define E(b,a) {\
J q=1^(X+b)[3+Q*\
int /*/*/ typedef
#define  W double
#define  z (Q<<5)

       J;J X[3+(Q              *Q)]
     ,t[z       ] ;J          main(
    ){ W         o= Q        >>1,V=
   o/5,           a=0;      M (1 [X
   ]=1;           X[1]         <z;X
  [1 ]             *=2)        for(
  X[2]             =0;X        [2]<
  X[1]             ;X[2        ]++,
  t[X[             2]+X        [1]]
  =1^t             [X[2        ]]);
  for(             0[X]        =0;X
  [0]<             3;0[        X]++
   )for           (X[2]        =0;X
   [2]<           z;X[         2]++
    )if(         t[X[          2]])
     a-=(       M_PI           /3.0
       );else{o+=          cos(a ); V+=

sin(a);X[3+( int)o+Q*(int)V]|=1;}printf(
"P6 %d %d 1 ",Q,Q);M(1[X] =0;X[1]<Q;X[1]
++)M(2[X]=0;X[ 2]<Q;X[2]++)E(2[X],X[1]);
return 0;} /* Colin Ian King 2013    */

To build and run:
gcc koch-binary.c -lm -o koch-binary 
./koch-binary | ppmtojpeg > koch-binary.jpg
And the result is a Koch snowflake:

The original source code can be found here. It contains a 512 x 512 turtle graphics plotter, a Thue-Morse binary sequence generator and a PPM output backend. Anyway, have fun figuring out how it works (it is only mildly obfuscated) and have a great Christmas!

Sunday, 15 December 2013

Detecting System Management Interrupts

System Management Mode (SMM) is a special operating mode on x86 processors that temporarily suspends normal execution and executes specialised firmware code at a high privilege level before returning.  SMM is entered via the System Management Interrupt (SMI) and is intended to work transparently to the operating system.

For example, SMM can be used to handle shutdown if CPU temperature is too high, perform transparent fan control, handle special system events (e.g. chipset errors), emulate hardware (non existing or buggy hardware) and a lot more besides.

SMM in theory cannot be disabled by the operating system, and SMIs have been known to interfere with the operating system even though they are meant to be transparent.  SMIs steal CPU cycles from the system: CPU state has to be saved and restored, and there are side effects from flushing the write-back cache.  This CPU cycle stealing can impact real-time behaviour, and in the past it has been hard to determine how frequently SMIs occur and hence how much potential disruption they bring to a system.

When the CPU enters SMM the output pin SMIACT# is asserted (and all further memory cycles are redirected to protected memory for SMM).  Hence one could use a logic analyser on SMIACT# to count SMIs.   An alternative is to run a non-interruptible thread on a CPU that checks for time skips by constantly monitoring the Time Stamp Counter (TSC), but this is a CPU-expensive operation.

Fortunately, modern Intel CPUs (such as Ivybridge and Haswell) have a special SMI counter in a Model Specific Register. MSR_SMI_COUNT (0x00000034) is incremented when SMIs occur, allowing easy detection of SMIs.

As a quick test, I hacked up smistat, which polls MSR_SMI_COUNT every second.  For example, pressing the backlight brightness keys on my Lenovo laptop bumps the counter and this is easy to see with smistat.   So this MSR provides some indication of the frequency of SMIs; however, it of course cannot inform us how many CPU cycles are stolen in SMM.   Now that would be a very useful MSR to add to the next revision of the Intel silicon...

Wednesday, 11 December 2013

fwts 13.12.00 released

Version 13.12.00 of the Firmware Test Suite has been released today.  This latest release includes the following new features:
  • ACPI method test, add _IPC, _IFT, _SRV tests
  • Update to version 20131115 of ACPICA
  • Check for thermal overrun messages in klog test
  • Test for CPU frequency maximum limits on cpufreq test
  • Add Ivybridge and Haswell CPU specific MSR checks
  • UEFI variable dump:
    • add more subtype support on acpi device path type  
    • handle the End of Hardware Device Path sub-type 0x01
    • add Fibre Channel Ex subtype-21 support on messaging device path type
    • add SATA subtype-18 support on messaging device path type
    • add USB WWID subtype-16 support on messaging device path type
    • add VLAN subtype-20 support on messaging device path type
    • add Device Logical Unit subtype-17 support on messaging device path type
    • add SAS Ex subtype-22 support on messaging device path type
    • add the iSCSI subtype-19 support on messaging device path type
    • add the NVM Express namespace subtype-23 support on messaging device path type
..and also the usual bug fixes. 

For more details, please consult the release notes.
The source tarball is available at:  http://fwts.ubuntu.com/release/fwts-V13.12.00.tar.gz and can be cloned from the repository: git://kernel.ubuntu.com/hwe/fwts.git

The fwts 13.12.00 .debs are available in the firmware testing PPA and will soon appear in Ubuntu Trusty 14.04.

And thanks to Alex Hung, Ivan Hu and Keng-Yu Lin for their contributions and keen eye for reviewing the numerous patches, and also to Robert Moore for the on-going work on ACPICA.

Tuesday, 10 December 2013

Digging into TRIM

Enabling the discard mount option to perform automatic trimming on an SSD has some downsides.  The non-queued TRIM command can incur a large performance penalty on deletes; the driver needs to finish pending operations, then perform the TRIM command before it can resume normal operation again.  Queued TRIM support has been added to the 3.12 kernel, which will help with this issue; however, one requires hardware that supports the Serial ATA revision 3.1 Queued TRIM command.

Trimming can trigger internal garbage collection in some firmware and cause a performance impact when one least expects it, so enabling TRIM via the discard mount flag may be suboptimal.  In terms of minimising performance impact it is therefore better to batch up discards and run fstrim periodically when the machine is idle (this is also more power efficient).

For the curious, how much is being discarded when one runs fstrim? Fortunately the kernel has a tracepoint, ext4_trim_extent, that enables one to observe the block device, the allocation group, the starting block of the free extent in the allocation group and the number of blocks to TRIM.

With debugfs mounted one can easily observe this using:

 echo 1 | sudo tee /sys/kernel/debug/tracing/events/ext4/ext4_trim_extent/enable  
 sudo cat /sys/kernel/debug/tracing/trace_pipe | grep ext4_trim_extent  

then invoke fstrim from another terminal and one will observe:

fstrim-4909  [003] ....  6585.463841: ext4_trim_extent: dev 8,3 group 1378, start 2633, len 829
fstrim-4909  [003] ....  6585.474926: ext4_trim_extent: dev 8,3 group 1378, start 3463, len 187
fstrim-4909  [003] ....  6585.484014: ext4_trim_extent: dev 8,3 group 1378, start 3652, len 474

The deeper question is therefore how frequently one should run fstrim?  It is hard to say exactly, since it is use-case dependent.  However, anecdotal evidence seems to suggest running fstrim either daily or weekly.

Friday, 22 November 2013

Finding spelling mistakes in source code

There are quite a few useful open source utilities around for finding spelling mistakes in source code.  Most recently I've been using codespell which works well for me.

Codespell is regularly being updated and comes with a dictionary originally derived from Wikipedia. I normally pull the latest updates from the repository before running it against my source code.

Fetching it and using it is relatively simple:
 git clone git://git.profusion.mobi/users/lucas/codespell  
 cd your-project-dir  
..then run the codespell script from the cloned directory and it will find common spelling mistakes in the entire project directory. Easy!

Tuesday, 29 October 2013

Making software successful

What makes good software into excellent, usable software?  In my opinion, software that fails to be excellent generally lacks some key attributes:

Good documentation
A README file in the source is not sufficient for the end-user.  Good quality documentation must explain how to use all the key features in a coherent manner.  For example, a command line tool should be at least shipped with a man page, and this must cover all the options in the tool and explain clearly how each option works.

Documentation must be kept up to date with the features in the software.  If a feature is added to a program and the documentation has not been updated then the developer has been careless and this is a bug. Out of date documentation that provides wrong and useless information makes a user lose faith in the tool.

Try to include worked examples whenever possible to provide clarity. A user may look at the man page, see a gazillion options and get lost in how to use a tool for the simple use-case.  So provide an example of how to get a user up and running with the simple use-case and then expand on this for the more complex use-cases.  Don't leave the user guessing.

If there are known bugs, document them. At least be honest with the user and let them know that there are issues that are being worked on rather than let them discover it by chance.

Expect quality when developing

Don't ship code that is fragile and easily breaks. That puts users off and they will never come back to it.  There really is no excuse for shipping poor quality code when there are so many useful tools that can be used to improve quality with little or no extra cost.

For example, develop new applications with pedantic options enabled, such as gcc's -Wall -Wextra options.   And check the code for memory leaks with tools such as valgrind and gcc's mudflap build flags.

Use static code analysis with tools such as smatch or Coverity Scan.  Ensure code is reviewed frequently and pedantically.  Humans make mistakes, so the more eyes that can review code the better.   Good reviewers impart their wisdom and make reviewing a valuable learning experience.

Be vigilant with all buffer allocations and inputs.  Use gcc's -fstack-protector to check for buffer overflows and tools such as ElectricFence.  Just set the bar high and don't cut corners.

Write tests.  Test early and often. Fix bugs before adding more functionality.  Fixing bugs can be harder work than adding shiny new features, which is why one needs to resist the temptation and put fixes before features.

Ensure any input data can't break the application.  In a previous role I analysed an MPEG2 decoder and manually examined every possible input pattern in the bitstream parsing to see if I could break the parser with bad data.  It took time, it required considerable effort, but it ended up being a solid piece of code.  Unsanitised input can break code, so be careful and meticulous.

Sane Defaults

Programs have many options, and the default modus operandi of any tool should be sane and reasonable.   If the user is having to specify lots of extra command line options to make it do the simplest of tasks then they will get frustrated and give up. 

I've used tools where one has to set a whole bunch of obtuse environment variables just to make them do the simplest thing.  Avoid this. If one really must do this, then document it early on rather than burying it at the end of the documentation.

Don't break backward compatibility

Adding new features is a great way to continually improve a tool but don't break backward compatibility.  A command line option that changes meaning or disappears between releases is really not helpful.     Sometimes code can no longer be supported and a re-write is required.  However, avoid rolling out replacement code that drops useful and well used features that the user expects.

The list could go on, but I believe that if one strives for quality one will achieve it.  Let's strive to make software not just great, but excellent!

Friday, 27 September 2013

health-check revisited

Earlier this month I wrote about health-check, a tool to sanity check application resource usage.  Since then I've re-worked and simplified the system call tracing and added some new features.

Originally, health-check was designed to attach itself to running processes. To make the tool even easier to use it can now start applications and follow any subsequent new processes spawned off from fork() or clone(). For example:
sudo health-check -u joeuser -f thunderbird
..will start thunderbird (running as user "joeuser") and follow any new processes it creates.

Sometimes applications will force flushing of data or metadata to disk by excessive use of fsync(), fdatasync() and sync().  Health-check will now keep track of these system calls and provide feedback on their use.

Health-check already had some memory checking techniques; however, it has now been extended to check explicitly for heap changes by examining the brk() system call and also by keeping track of mmap() and munmap() mappings.  This allows better tracking of potential memory leaks.

Source code can be found at: git://kernel.ubuntu.com/cking/health-check

Packages can be found in my White PPA (ppa:colin-king/white), so to install on Ubuntu systems use:
 sudo add-apt-repository ppa:colin-king/white  
 sudo apt-get update  
 sudo apt-get install health-check  

Thursday, 5 September 2013

health-check: a tool to diagnose resource usage

Recently I have been focused on ways to reduce power consumption on Ubuntu phones. To identify resource-hungry processes one can use tools such as top, strace and gdb, or inspect per-process properties in /proc/$pid; however, this can be time consuming and is not practical with many processes in a system. Instead, I decided it would be profitable to write a tool to inspect and monitor a running program and report on areas where it appeared to be sub-optimal.  And so health-check was born.

One provides health-check with a list of one or more processes and it will monitor all the associated threads (and child processes) and then report back on the resources used. Health-check will report on:
  • CPU utilisation
  • Wakeup events
  • Context Switches
  • File I/O operations (open/read/write/close using fanotify)
  • System calls (using ptrace)
  • Analysis of polling system calls
  • Memory utilisation (including memory growth)
  • Network connections
  • Wakelock activity
CPU utilisation, wakeup events and context switches are useful just to see how much CPU-related activity is occurring.   For example, a program with multiple threads that frequently ping-pong between them will have a high context switch rate, which may indicate that it is busy passing data or messages between threads.  A process that frequently polls on short timer delays may show up as generating a high level of wakeup events.

Some applications may be sub-optimally writing out data frequently, causing dirty pages and meta data that needs to be written back to the file system.  Health-check will capture file I/O activity and report on the names of the files being opened, read, written and closed.

To help identify excessive or heavy system call usage, health-check uses ptrace to trap and monitor all the system calls that the program makes.  For example, it has been observed that some applications excessively call poll() and nanosleep() with poorly chosen timeouts, causing excessive CPU utilisation.  For system calls such as these, which can wait until an event or a timeout occurs, health-check has some deeper monitoring.  It inspects the given timeout delay and checks to see if the call timed out; for example, health-check can identify CPU-sucking repeated polling where zero timeouts are being used, or excessive nanosleeps with zero or negative delays.

The ptrace ability of health-check also allows it to monitor per-process wakelock writes. Abuse of wakelocks can keep the kernel from suspending into deep sleep, so it is useful to keep track of wakelock activity of some processes. This is not enabled by default as it is an expensive operation to monitor this via ptrace and also some kernels may not have wakelocks, so one has to use the -W option to enable it.

Health-check also inspects /proc/$pid/smaps and will determine if memory utilisation has grown or shrunk.  Unusually high heap growth over time may indicate that an application has a memory leak.

Finally, health-check will inspect /proc/$pid/fd and from this determine any open sockets and then try and resolve the host names of the IP addresses.  For example, it is entirely possible for an application to be making spurious or unwanted connections to various machines, so it is helpful to check up on this kind of activity.

Health-check is still very early alpha quality, so beware of possible bugs.   However, it has been helpful in identifying some misbehaving applications, so it is already proving to be rather useful.

Source code can be found at: git://kernel.ubuntu.com/cking/health-check

Packages can be found in my White PPA (ppa:colin-king/white), so to install on Ubuntu systems use:

 sudo add-apt-repository ppa:colin-king/white  
 sudo apt-get update  
 sudo apt-get install health-check  
..and go and track down some resource sucking apps..

Wednesday, 7 August 2013

fwts 13.08.00 released.

Version 13.08.00 of the Firmware Test Suite has been released today.  This latest release includes the following new features:

* uefirtvariable - can now specify the number of iterations for high duration soak testing,
* new --acpica option to enable core ACPICA functionality (such as ACPI slack mode and serialized AML execution),
* sync klog scan patterns with Linux 3.10,
* klog test now parses firmware related kernel warning messages,
* JSON log output now supports "pretty" formatting,
* update to ACPICA version 20130725,
* SMBIOS and dmi_decode tests merged into a new dmicheck test,

..and also the usual bug fixes.  We have been using Coverity Scan and this has helped to improve the quality of the code in the recent releases.

For more details, please consult the release notes.
The source tarball is available at:  http://fwts.ubuntu.com/release/fwts-V13.08.00.tar.gz and can be cloned from the repository: git://kernel.ubuntu.com/hwe/fwts.git

The fwts 13.08.00 .debs are available in the firmware testing PPA and will soon appear in Ubuntu Saucy 13.10.

Thanks to Alex Hung, Ivan Hu, Keng-Yu Lin and Zhang Rui for their contributions and also to Robert Moore for the on-going work on ACPICA.

Monday, 15 July 2013

Firmware Test Suite Portal

The Firmware Test Suite (fwts) portal page is the first place to visit for all fwts related links.   It has links to:
  • Where to get the latest source code (git repository and tarballs)
  • PPAs for the latest and stable packages
  • Release notes (always read these to see what is new!)
  • Reference Guide / Documentation
  • How to report a bug (against firmware or fwts)
  • Release schedule, cadence and versioning
Thanks to Keng-Yu Lin for setting this up.

Wednesday, 3 July 2013

fwts 13.07.00 released.

Version 13.07.00 of the Firmware Test Suite has been released today.  This latest release includes the following new features:

* A new uefivarinfo utility to show UEFI NVRAM variable usage
* Use the latest version of ACPICA (version 20130626)
* SMBIOS test now performs some more sanity checks
* kernel log test now scans for lpc_ich warnings
* Add the --acpica-debug option (for debugging purposes only)

..and also a bunch of bug fixes.

For more details, please consult the release notes.
The source tarball is available at:  http://fwts.ubuntu.com/release/fwts-V13.07.00.tar.gz and can be cloned from the repository: git://kernel.ubuntu.com/hwe/fwts.git

The fwts 13.07.00 .debs are available in the firmware testing PPA and will soon appear in Ubuntu Saucy 13.10.

Friday, 17 May 2013

Kernel tracing using lttng

LTTng (Linux Trace Toolkit - next generation) is a highly efficient system tracer that allows tracing of the kernel and userspace. It also provides tools to view and analyse the gathered trace data.  So let's see how to install and use LTTng kernel tracing in Ubuntu. First, one has to install the LTTng userspace tools:
 sudo apt-get update  
 sudo apt-get install lttng-tools babeltrace
LTTng has recently been added to the Ubuntu 13.10 Saucy kernel; however, with earlier releases one needs to install the LTTng kernel driver using lttng-modules-dkms as follows:

 sudo apt-get install lttng-modules-dkms  
It is a good idea to sanity check that the tools and driver are installed correctly, so first list the available kernel events on your machine:

 sudo lttng list -k  
And you should get a list similar to the following:
 Kernel events:  
    mm_vmscan_kswapd_sleep (loglevel: TRACE_EMERG (0)) (type: tracepoint)  
    mm_vmscan_kswapd_wake (loglevel: TRACE_EMERG (0)) (type: tracepoint)  
    mm_vmscan_wakeup_kswapd (loglevel: TRACE_EMERG (0)) (type: tracepoint)  
    mm_vmscan_direct_reclaim_begin (loglevel: TRACE_EMERG (0)) (type: tracepoint)  
    mm_vmscan_memcg_reclaim_begin (loglevel: TRACE_EMERG (0)) (type: tracepoint)  
Next, we need to create a tracing session:
 sudo lttng create examplesession  
..and enable events to be traced using:
 sudo lttng enable-event sched_process_exec -k  
One can also specify multiple events as a comma separated list. Next, start the tracing using:
 sudo lttng start  
and to stop and complete the tracing use:
 sudo lttng stop  
 sudo lttng destroy  
and the trace data will be saved in the directory ~/lttng-traces/examplesession-[date]-[time]/.  One can examine the trace data using the babeltrace tool, for example:
 sudo babeltrace ~/lttng-traces/examplesession-20130517-125533  
And you should get a list similar to the following:
 [12:56:04.490960303] (+?.?????????) x220i sched_process_exec: { cpu_id = 2 }, { filename = "/usr/bin/firefox", tid = 4892, old_tid = 4892 }  
 [12:56:04.493116594] (+0.002156291) x220i sched_process_exec: { cpu_id = 0 }, { filename = "/usr/bin/which", tid = 4895, old_tid = 4895 }  
 [12:56:04.496291224] (+0.003174630) x220i sched_process_exec: { cpu_id = 2 }, { filename = "/usr/lib/firefox/firefox", tid = 4892, old_tid = 4892 }  
 [12:56:05.472770438] (+0.976479214) x220i sched_process_exec: { cpu_id = 2 }, { filename = "/usr/lib/libunity-webapps/unity-webapps-service", tid = 4910, old_tid = 4910 }  
 [12:56:05.478117340] (+0.005346902) x220i sched_process_exec: { cpu_id = 2 }, { filename = "/usr/bin/ubuntu-webapps-update-index", tid = 4912, old_tid = 4912 }  
 [12:56:10.834043409] (+5.355926069) x220i sched_process_exec: { cpu_id = 3 }, { filename = "/usr/bin/top", tid = 4937, old_tid = 4937 }  
 [12:56:13.668306764] (+2.834263355) x220i sched_process_exec: { cpu_id = 3 }, { filename = "/bin/ps", tid = 4938, old_tid = 4938 }  
 [12:56:16.047191671] (+2.378884907) x220i sched_process_exec: { cpu_id = 3 }, { filename = "/usr/bin/sudo", tid = 4939, old_tid = 4939 }  
 [12:56:16.059363974] (+0.012172303) x220i sched_process_exec: { cpu_id = 3 }, { filename = "/usr/bin/lttng", tid = 4940, old_tid = 4940 }  
The LTTng wiki contains many useful worked examples and is well worth exploring.

As it stands, LTTng is relatively lightweight.   Research by Romik Guha Anjoy and Soumya Kanti Chakraborty shows that the CPU overhead of LTTng is ~1.6% on an Intel® Core™ 2 Quad with four 64-bit Q9550 cores.  Measurements I've made with oprofile on a Nexus 4 with a 1.5 GHz quad-core Snapdragon S4 Pro processor show a CPU overhead of < 1% for kernel tracing.  In flight recorder mode one can generate a lot of trace data; for example, with all tracing enabled and running multiple stress tests I was able to generate ~850K of trace data per second, so this will obviously impact disk I/O.

Wednesday, 8 May 2013

Getting started with oprofile on Ubuntu

Oprofile is a powerful system wide profiler for Linux.  It can profile all running code on a system with minimal overhead.   Running oprofile requires the uncompressed vmlinux image, so one has to also install the kernel .ddeb images.

To install oprofile:
 sudo apt-get update && sudo apt-get install oprofile
..and then install the kernel .ddebs:
 echo "deb http://ddebs.ubuntu.com $(lsb_release -cs) main restricted universe multiverse" | \  
 sudo tee -a /etc/apt/sources.list.d/ddebs.list  
 sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 428D7C01  
 sudo apt-get update  
 sudo apt-get install linux-image-$(uname -r)-dbgsym  
 ..the installed vmlinux image can be found in /usr/lib/debug/boot/vmlinux-$(uname -r)

Oprofile is now ready to be used.  Let's assume we want to profile the following command:
 dd if=/dev/urandom of=/dev/null bs=4K  
First, before running opcontrol, one may have to stop the NMI watchdog to free up counter 0 using the following:
 echo "0" | sudo tee /proc/sys/kernel/watchdog  
Next, we tell opcontrol the location of vmlinux, separate out kernel samples, initialize, reset profiling and start profiling:
 sudo opcontrol --vmlinux=/usr/lib/debug/boot/vmlinux-$(uname -r)  
 sudo opcontrol --separate=kernel  
 sudo opcontrol --init  
 sudo opcontrol --reset  
 sudo opcontrol --start  
 ..and run the command we want to profile for the desired duration. Next we stop profiling, generate a report for the executable we are interested in and de-initialize oprofile using:
 sudo opcontrol --stop  
 sudo opreport image:/bin/dd -gl  
 sudo opcontrol --deinit  
The resulting output from opreport is as follows:
 Using /var/lib/oprofile/samples/ for samples directory.  
 warning: /kvm could not be found.  
 CPU: Intel Ivy Bridge microarchitecture, speed 2.501e+06 MHz (estimated)  
 Counted CPU_CLK_UNHALTED events (Clock cycles when not halted) with a unit mask of 0x00 (No unit mask) count 100000  
 samples %    image name        symbol name  
 55868  59.8973 vmlinux-3.9.0-0-generic sha_transform  
 14942  16.0196 vmlinux-3.9.0-0-generic random_poll  
 10971  11.7622 vmlinux-3.9.0-0-generic ftrace_define_fields_random__mix_pool_bytes  
 3977   4.2638 vmlinux-3.9.0-0-generic extract_buf  
 1905   2.0424 vmlinux-3.9.0-0-generic __mix_pool_bytes  
 1596   1.7111 vmlinux-3.9.0-0-generic _mix_pool_bytes  
 900    0.9649 vmlinux-3.9.0-0-generic __ticket_spin_lock  
 737    0.7902 vmlinux-3.9.0-0-generic copy_user_enhanced_fast_string  
 574    0.6154 vmlinux-3.9.0-0-generic perf_trace_random__extract_entropy  
 419    0.4492 vmlinux-3.9.0-0-generic extract_entropy_user  
 336    0.3602 vmlinux-3.9.0-0-generic random_fasync  
 146    0.1565 vmlinux-3.9.0-0-generic sha_init  
 133    0.1426 vmlinux-3.9.0-0-generic wait_for_completion  
 129    0.1383 vmlinux-3.9.0-0-generic __ticket_spin_unlock  
 72     0.0772 vmlinux-3.9.0-0-generic default_spin_lock_flags  
 69     0.0740 vmlinux-3.9.0-0-generic _copy_to_user  
 35     0.0375 dd            /bin/dd  
 23     0.0247 vmlinux-3.9.0-0-generic __srcu_read_lock  
 22     0.0236 vmlinux-3.9.0-0-generic account  
 15     0.0161 vmlinux-3.9.0-0-generic fsnotify
This example just scratches the surface of the capabilities of oprofile. For further reading I recommend reading the oprofile manual as it contains some excellent examples.

Friday, 26 April 2013

Firmware Test Suite New Features in Ubuntu Raring 13.04

The Firmware Test Suite (fwts) is a tool containing a large set of tests to exercise and diagnose firmware related bugs in x86 PC firmware.  So what new shiny features have appeared in the new Ubuntu Raring 13.04 release?

UEFI specific tests to exercise and stress test various UEFI run time services:
  * Stress test for miscellaneous run time service interfaces.
  * Test get/set time interfaces.
  * Test get/set wakeup time interfaces.
  * Test get variable interface.
  * Test get next variable name interface.
  * Test set variable interface.
  * Test query variable info interface. 
  * Set variable interface stress test.
  * Query variable info interface stress test.
  * Test Miscellaneous runtime service interfaces.

These use a new kernel driver to allow fwts to access the kernel UEFI run time interfaces.  The driver is built and installed using DKMS.

ACPI specific improvements:

  * Improved ACPI 5.0 support
  * Annotated ACPI _CRS (Current Resource Settings) dumping.

Kernel log scanning (finds and diagnoses errors as reported by the kernel):

  * Improved kernel log scanning with an additional 450 tests.

This release also includes many small bug fixes as well as minor improvements to the layout of the output of some of the tests.

Many thanks to Alex Hung, Ivan Hu, Keng-Yu Lin and Matt Fleming for all the improvements to fwts for this release.

Sunday, 21 April 2013

Valgrind stack traces

Sometimes when debugging an application it is useful to generate a stack dump when a specific code path is being executed.  The valgrind tool provides a very useful and easy to use mechanism to do this:

1. Add in the following to the source file:
 #include <valgrind/valgrind.h>  
2. Generate the stack trace at the point you desire (and print a specific message) using VALGRIND_PRINTF_BACKTRACE(), for example:
 VALGRIND_PRINTF_BACKTRACE("Stack trace @ %s(), %d", __func__, __LINE__);  
3. Run the program with valgrind.  You may wish to use the --tool=none option to make valgrind run a little faster:
  valgrind --tool=none ./generate/unix/bin64/acpiexec *.dat  
4. Observe the stack trace. For example, I added this to the ACPICA acpiexec in AcpiDsInitOneObject() and got stack traces such as:
 ACPI: SSDT 0x563a480 00249 (v01 LENOVO TP-SSDT2 00000200 INTL 20061109)  
 **7129** Stack trace @ AcpiDsInitOneObject(), 174  
 ==7129==  at 0x416041: VALGRIND_PRINTF_BACKTRACE (in /home/king/repos/acpica/generate/unix/bin64/acpiexec)  
 ==7129==  by 0x4160A6: AcpiDsInitOneObject (in /home/king/repos/acpica/generate/unix/bin64/acpiexec)  
 ==7129==  by 0x441F76: AcpiNsWalkNamespace (in /home/king/repos/acpica/generate/unix/bin64/acpiexec)  
 ==7129==  by 0x416312: AcpiDsInitializeObjects (in /home/king/repos/acpica/generate/unix/bin64/acpiexec)  
 ==7129==  by 0x43D84D: AcpiNsLoadTable (in /home/king/repos/acpica/generate/unix/bin64/acpiexec)  
 ==7129==  by 0x450448: AcpiTbLoadNamespace (in /home/king/repos/acpica/generate/unix/bin64/acpiexec)  
 ==7129==  by 0x4502F6: AcpiLoadTables (in /home/king/repos/acpica/generate/unix/bin64/acpiexec)  
 ==7129==  by 0x405D1A: AeInstallTables (in /home/king/repos/acpica/generate/unix/bin64/acpiexec)  
 ==7129==  by 0x4052E8: main (in /home/king/repos/acpica/generate/unix/bin64/acpiexec)  

There is a collection of very useful tricks to be found in the Valgrind online manual, which I recommend perusing at your leisure.

Friday, 1 March 2013

Pragmatic Graphing

Over the past few days I have been analysing various issues and also doing some background research, so I have been collecting some rather large sets of data to process.   Normally I filter, re-format and process the data using a bunch of simple tools such as awk, tr, cut, sort, uniq and grep to get the data into some form where it can be plotted using gnuplot. 

The UNIX philosophy of piping together a bunch of tools to produce the final output normally works fine; however, graphing the data with gnuplot always ends up with me digging around in the online gnuplot documentation or reading old gnuplot files to remind myself exactly how to plot the data just the way I want.   This is fine for occasions where I gather lots of identical logs and want to compare results from multiple tests; the investment in time to automate this with gnuplot is well worth the hassle.   However, sometimes I just have a handful of samples and want to plot a graph, then quickly re-jig the data and perhaps calculate some statistical information such as trend lines.  In this case, I fall back to shoving the samples into LibreOffice Calc and slamming out some quick graphs.

This makes me choke a bit.  Using LibreOffice Calc starts to make me feel like I'm an accountant rather than a software engineer.  However, once I have swallowed my pride I have come to the conclusion that one has to be pragmatic and use the right tool for the job.  To turn around small amounts of data quickly, LibreOffice Calc does seem to be quite useful.   For processing huge datasets and automated graph plotting, gnuplot does the trick (as long as I can remember how to use it).   I am a command line junkie and really don't like using GUI based power tools, but there does seem to be a place where I can mix the two quite happily.