
XenStore

Another important part of the administrative architecture for a Xen system is XenStore. Earlier in this chapter, we saw the XenStore daemon, xenstored, in the process listing of a running Xen system. In this section, we discuss XenStore in more detail.

XenStore is a database of configuration information shared between domains. Domains read and write the XenStore database to communicate with each other. This database is maintained by Domain0 on behalf of all the domains. XenStore supports atomic operations such as reading a key or writing a key. When new values are written into XenStore, the affected domains are notified.

XenStore is often used as a mechanism for controlling devices in guest domains. It can be accessed in a number of ways, such as through a UNIX socket in Domain0, a kernel-level API, or an ioctl interface. Device drivers write request and completion information into XenStore. Although drivers can write anything they want, XenStore is designed for small pieces of information, such as configuration data or status, not for large pieces of information such as bulk data transfers.

XenStore is actually located in a single file, /var/lib/xenstored/tdb, on Domain0. (tdb stands for Trivial Database, the same lightweight database format used by the Samba project.) Listing 3.7 shows the tdb file located in the /var/lib/xenstored directory on Domain0 and some attributes of this file.

Listing 3.7. Contents of /var/lib/xenstored/

[root@dom0]:/var/lib/xenstored# ls -lah
total 112K
drwxr-xr-x  2 root root 4.0K Feb 24 13:38 .
drwxr-xr-x 25 root root 4.0K Oct 16 11:40 ..
-rw-r-----  1 root root  40K Feb 24 13:38 tdb
[root@dom0]:/var/lib/xenstored#
[root@dom0]:/var/lib/xenstored# file tdb
tdb: TDB database version 6, little-endian hash size 7919 bytes
[root@dom0]:/var/lib/xenstored#

Viewed another way, XenStore is similar in flavor to the /proc or sysfs virtual file systems found in Linux and some UNIX variants. Internal to the XenStore database file is a logical file system tree with three main paths: /vm, /local/domain, and /tool. The /vm and /local/domain paths have subareas dedicated to individual domains, while the /tool path stores general information about various tools and is not indexed by domain. (The second directory level in /local/domain may seem unnecessary, given that /local contains only the domain subdirectory.)

Each domain has two identification numbers. The universal unique identifier (UUID) is an identifying number that remains the same even if the guest is migrated to another machine. The domain identifier (DOMID) is an identifying number that refers to a particular running instance. The DOMID typically changes when the guest is migrated to another machine.

The /vm path is indexed by the UUID of each domain and stores configuration information such as the number of virtual CPUs and the amount of memory allocated to the domain. For each domain, there is a /vm/<uuid> directory. Table 3.5 explains the contents of the /vm/<uuid> directory.

Table 3.5. Contents of the /vm/<uuid> Directory

uuid: UUID of the domain. The UUID of a domain does not change during migration, but the domain ID does. Because the /vm directory is indexed by UUID, this entry is somewhat redundant.

ssidref: SSID reference for the domain.

on_reboot: Specifies whether to destroy or restart the domain in response to a domain reboot request.

on_poweroff: Specifies whether to destroy or restart the domain in response to a domain halt request.

on_crash: Specifies whether to destroy or restart the domain in response to a domain crash.

vcpus: Number of virtual CPUs allocated to this domain.

vcpu_avail: Number of active virtual CPUs for this domain. (The number of disabled virtual CPUs is vcpus minus vcpu_avail.)

memory: Amount of memory in megabytes allocated to the domain.

name: Name of the domain.
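As a sketch of how these entries are read in practice, the following shell fragment queries two of them with the xenstore-read tool (described later in this section, in Table 3.9). The UUID shown is a placeholder, not a real domain; substitute one of the directory names that appear under /vm on your system.

```shell
# Illustrative only: this UUID is a placeholder; real UUIDs appear as
# directory names under /vm in XenStore.
UUID="c23a7b5f-0000-0000-0000-000000000000"
VM_PATH="/vm/${UUID}"

# Query the configured VCPU count and memory allocation, if the XenStore
# command-line tools are installed (they require root in Domain0).
if command -v xenstore-read >/dev/null 2>&1; then
    xenstore-read "${VM_PATH}/vcpus"
    xenstore-read "${VM_PATH}/memory"
fi
```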

Regular guest domains (DomUs) also have a /vm/<uuid>/image directory. Table 3.6 explains the contents of the /vm/<uuid>/image directory.

Table 3.6. Contents of the /vm/<uuid>/image Directory

ostype: linux or vmx, identifying the builder type.

kernel: Path on Domain0 to the kernel for this domain.

cmdline: Command line passed to the kernel for this domain when booting.

ramdisk: Path on Domain0 to the ramdisk for this domain.

The /local/domain path is indexed by DOMID and contains information about the running domain, such as the CPU the domain is currently pinned to or the tty on which its console data is exposed. For each domain, there is a /local/domain/<domId> directory. Note that the UUID used to index the /vm directory is not the same as the DOMID used to index /local/domain. The UUID does not change during migration, but the DOMID does; this is what makes localhost-to-localhost migration possible. Some of the information in /local/domain also appears in /vm, but /local/domain contains significantly more information, and the copy in /vm does not change. The /local/domain directory for a domain also contains a pointer to the /vm directory for the same domain. Table 3.7 explains the contents of the /local/domain/<domId> directory.
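The pointer from /local/domain back to /vm can be followed with xenstore-read, as in the hedged sketch below. Domain ID 1 is a hypothetical guest; real IDs appear as directory names under /local/domain.

```shell
# Hypothetical domain ID; substitute a real one from /local/domain.
DOMID=1
LOCAL_PATH="/local/domain/${DOMID}"

# The vm entry under /local/domain/<domId> points back at the UUID-indexed
# /vm directory for the same domain, tying the two trees together.
if command -v xenstore-read >/dev/null 2>&1; then
    VM_PATH=$(xenstore-read "${LOCAL_PATH}/vm")   # e.g., /vm/<uuid>
    xenstore-read "${VM_PATH}/name"
fi
```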

Table 3.7. Contents of the /local/domain/<domId> Directory

domId: Domain identifier of the domain. The domain ID changes during migration, but the UUID does not. Because the /local/domain directory is indexed by domId, this entry is somewhat redundant.

/vm-Related Entries

on_reboot, on_poweroff, on_crash, name: Refer to Table 3.5.

vm: Pathname of the /vm directory for this same domain.

Scheduling-Related Entries

running: If present, indicates that the domain is currently running.

cpu: Current CPU to which this domain is pinned.

cpu_weight: The weight assigned to this domain for scheduling purposes. Domains with higher weights use the physical CPUs more often.

xend-Related Entries

cpu_time: Despite the name, this is not related to the cpu and cpu_weight entries; it records the xend start time and is used for Domain0 only.

handle: Private handle for xend.

image: Private xend information.

Under /local/domain/<domId> for each domain, there are also several subdirectories including memory, console, and store. Table 3.8 describes the contents of these subdirectories.

Table 3.8. Subdirectories of the /local/domain/<domId> Directory

/local/domain/<domId>/console

ring-ref: Grant table entry of the console ring queue.

port: Event channel used for the console ring queue.

tty: tty on which the console data is currently being exposed.

limit: Limit in bytes of console data to be buffered.

/local/domain/<domId>/store

ring-ref: Grant table entry of the store ring queue.

port: Event channel used for the store ring queue.

/local/domain/<domId>/memory

target: Target memory size in kilobytes for this domain.
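Two of these subdirectory entries are easy to inspect directly; the fragment below reads the console tty and the memory target for a hypothetical guest with domain ID 1, assuming the XenStore tools are installed and run as root in Domain0.

```shell
DOMID=1                        # hypothetical domain ID
BASE="/local/domain/${DOMID}"

# Read the tty carrying this domain's console output and its current
# memory target (a value in kilobytes), if the tools are available.
if command -v xenstore-read >/dev/null 2>&1; then
    xenstore-read "${BASE}/console/tty"
    xenstore-read "${BASE}/memory/target"
fi
```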

Three additional subdirectories under /local/domain/<domId> are related to device management—backend, device, and device-misc. These all have subdirectories of their own.

Earlier in this chapter, we discussed how device management in Xen is divided into backend drivers running in privileged domains with direct access to the physical hardware and frontend drivers that give unprivileged domains the illusion of a generic and dedicated version of that resource. The backend subdirectory provides information about all physical devices managed by this domain and exported to other domains. The device subdirectory provides information about all frontend devices used by this domain. Finally, the device-misc directory provides information for other devices. In each case, there can be a vif and vbd subdirectory. Virtual Interface (vif) devices are for network interfaces, and Virtual Block Devices (vbd) are for block devices such as disks or CD-ROMs. In Chapter 9, "Device Virtualization and Management," we discuss in more detail how XenStore enables communication between backend and frontend drivers.
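As a rough illustration of that split, the sketch below lists the network backends Domain0 exports and the frontend vifs a guest sees. Domain ID 1 is a hypothetical guest, and the listing requires the XenStore tools and sufficient privileges.

```shell
# Backend devices Domain0 exports live under backend/vif; a guest's
# frontend devices live under its own device/vif directory.
BACKEND_VIFS="/local/domain/0/backend/vif"
FRONTEND_VIFS="/local/domain/1/device/vif"   # domain ID 1 is hypothetical

if command -v xenstore-list >/dev/null 2>&1; then
    xenstore-list "${BACKEND_VIFS}"    # one subdirectory per guest with a vif
    xenstore-list "${FRONTEND_VIFS}"   # frontend vif devices in that guest
fi
```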

Table 3.9 describes a number of tools that can be used to explore and manipulate XenStore. They are often located in /usr/bin. These commands allow the XenStore database, which is stored as a file (/var/lib/xenstored/tdb), to be viewed as a logical file system.

Table 3.9. XenStore Commands

xenstore-read <path to XenStore entry>: Displays the value of a XenStore entry.

xenstore-exists <XenStore path>: Reports whether a particular XenStore path exists.

xenstore-list <XenStore path>, xenstore-ls <XenStore path>: Shows all the child entries or directories of a specific XenStore path.

xenstore-write <path to XenStore entry> <value>: Updates the value of a XenStore entry.

xenstore-rm <XenStore path>: Removes a XenStore entry or directory.

xenstore-chmod <XenStore path> <mode>: Updates the permissions on a XenStore entry to allow reads or writes.

xenstore-control: Sends commands to xenstored, such as check to trigger an integrity check.

xsls <XenStore path>: Recursively shows the contents of a specified XenStore path; equivalent to a recursive xenstore-list plus a xenstore-read to display each value.

Listing 3.8 shows an example of running xenstore-list on /local/domain/0.

Listing 3.8. xenstore-list

[user@dom0]# xenstore-list /local/domain/0
cpu
memory
name
console
vm
domid
backend
[user@dom0]#
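A short session combining several of the commands from Table 3.9 might look like the sketch below, which writes, checks, reads, and removes a scratch key. The path is illustrative only, and writing requires root in Domain0.

```shell
# Scratch key used purely for illustration; any writable path would do.
KEY="/local/domain/0/scratch-example"

if command -v xenstore-write >/dev/null 2>&1; then
    xenstore-write "${KEY}" "42"            # create or update the entry
    xenstore-exists "${KEY}" && echo "key exists"
    xenstore-read "${KEY}"                  # prints the value just written
    xenstore-rm "${KEY}"                    # remove the scratch key again
fi
```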

Listing 3.9 shows a shell script from the XenWiki that dumps the entire contents of XenStore. Notice the calls to xenstore-list and xenstore-read. The effect is similar to running xsls on the XenStore root.

Listing 3.9. Shell Script for Dumping the Contents of XenStore from http://wiki.xensource.com/xenwiki/XenStore

#!/bin/bash
# Note: bash, not sh, because the script uses the bash-only "function"
# keyword and "echo -n".

# Recursively walk a XenStore path: if a node has children, descend into
# each child; otherwise print the leaf as "path=value".
function dumpkey() {
   local param=${1}
   local key
   local result
   result=$(xenstore-list "${param}")
   if [ "${result}" != "" ] ; then
      for key in ${result} ; do dumpkey "${param}/${key}" ; done
   else
      echo -n "${param}="
      xenstore-read "${param}"
   fi
}

# Dump the three main XenStore trees.
for key in /vm /local/domain /tool ; do dumpkey ${key} ; done