
MASS Pilot Deployment and Tests

MASS has been deployed as a pilot project with mission-critical applications running on it. In addition, practical tests have been carried out in the Sun facilities at Watchmoor 2 in London, UK. The rest of this article contains the report from these tests.

Watchmoor Facilities Tests

It is possible to use a broadband network as a carrier for the Sun Ray protocol. The tests described in this article were carried out by the Sun Professional Services Network Team in cooperation with the Sun geographic sales office (see TABLE 1).

Testing clearly demonstrates that a Sun Ray server, directly attached to a broadband network, with the use of broadband-to-Ethernet proxy switch technology (media gateway), provides several beneficial features, such as scalability, high availability, redundancy, and long distance coverage. The tests described in this article are, however, aimed at exploring the functionality and behavior of a Sun Ray environment carried over a combination of broadband and Ethernet technologies. The tests do not cover scalability, general performance, or performance under load.


Due to concerns about slow connections, networks for Sun Ray appliances are very strictly defined. Deployment of Sun Ray appliances is limited to small office environments (30–50 Sun Ray appliances per server). This stipulation has, however, not hindered speculation about using Sun Ray appliances as a maintenance-free alternative to PCs for the average person in urban areas where MANs or CityNets already exist. To verify both the limitations and the speculation, the first series of tests was carried out 18 months ago at an actual customer site in Sweden. This customer site currently uses broadband technology as its backbone.

Sun Ray appliances were deployed in different campuses with one Sun Ray server (a Sun Enterprise 250 server) placed in a computer center on one of the campuses. Cable lengths between the campuses were up to 15 km. The Sun Ray appliances were attached to Ethernet/broadband proxy switches over 10 Mbps and 100 Mbps networks. To this day, the customer runs this configuration in production and is very pleased, even though the configuration is not formally supported.

To demonstrate the technology in a controlled environment, the two-person team (Lars Persson and Jane Lundstrom) who performed the work on Sun Ray appliances over broadband networks in Sweden was asked to set up a miniature replica of the customer's Sun Ray environment.


The broadband equipment used in the tests was manufactured by Marconi. The computers and Sun Ray appliances were made by Sun Microsystems.

  • One ES3810 proxy switch with dual OC3 uplinks, two 100 Mbps Ethernet connectors, and 12+24 10 Mbps Ethernet connectors

  • Two ASX200 BX backbone switches with 3X4 OC3 and 1X1 OC12 per switch. These ports are both UTP and SC fiber, and the fiber ports are both single-mode and multimode.

  • One Ultra Enterprise 10 workstation functioning as a Sun Ray server with a dual OC3 BIC

  • Two Ultra 1 workstations functioning as a web server and a video/audio TRADER with a single OC3 BIC

  • One AVA-300 video/audio digitizer

  • Two Sun Ray appliances

FIGURE 4 shows how the equipment was set up.

FIGURE 4 Physical View of Sun Ray Replica Environment

Broadband Technology Used and Why

Based on the experience of the test team and hardware availability, LANE 2.0 over broadband ISDN was chosen. Broadband ISDN runs on top of both SONET and SDH; in these tests, SONET was chosen. There is no reason why Sun Ray appliances should not run over raw SONET or SDH (PPP), but currently this technology is limited to broadband networks using Ethernet proxy switches.

TABLE 1 Watchmoor Facilities Broadband Tests

Test 1: Connectivity test. Load a Sun Ray appliance through a proxy switch over one broadband switch to a broadband-attached Sun Ray server with one BIC.

Result: A Sun Ray appliance boots over 10 Mbps and 100 Mbps networks from a broadband-attached Sun Ray server. It is possible to switch between 100 Mbps and 10 Mbps without the Sun Ray appliance losing its connection to the Sun Ray server. The session is maintained even though the Sun Ray appliance is moved to another port on the proxy switch, both on the same card and to a different card. It is also possible to move between 10 Mbps and 100 Mbps and get a renegotiation of interface speed. Likewise, the Sun Ray server can be moved to different ports on the broadband switch without loss of connectivity. On the Sun Ray appliance, the session froze for up to 15 seconds with each move, but the session was never lost.

Test 2: Same as Test 1, but use dual uplinks on the proxy switch and test failover.

Result: Unplugging one interface caused the switch to fail over to the next BIC. Reconnecting the interface and unplugging the other one made the server fail over to the first BIC again. On the Sun Ray appliance, the session froze for 10 to 15 seconds, but the session was never lost.

Test 3: Same as Test 2, but over two broadband switches with the server connected to one broadband switch and the proxy to the other broadband switch (one interlink hop).

Result: Same results as observed in Test 1.

Test 4: Same as Test 3, but using one proxy uplink on one broadband switch and the other proxy uplink on the second broadband switch.

Result: Same failover results as observed in Test 2.

Test 5: Same as Test 4, but using dual BICs on the server, each connected to its own broadband switch. Tested whether the Sun Ray session survives the failure of one of the switches (power cycling) and, after that broadband switch returns, whether it survives power cycling of the other broadband switch.

Result: Failure of an entire broadband switch does not affect the Sun Ray appliance session, except for the time delay described previously. Failure of the second switch, after the first switch was restored, yielded exactly the same results.

Test 6: Server attached to one broadband switch, proxy switch attached to the other broadband switch, and the inter-broadband switch link consisting of a 35 km single-mode fiber.

Result: Because of hardware limitations of the broadband switches, the entire test could not be completed. It was, however, possible to implement a workaround by using one quad board on one of the broadband switches and forcing the traffic out on one port and back in on the other, thus enabling us to boot and run a Sun Ray appliance at a distance of 35 km from the server. There were no observable delays in the Sun Ray appliance response or general behavior.

Test 7: Introduce digitized PAL video from a camcorder (or similar) to an audio/video unit (AVA) handled by an OC3-attached TRADER, and display it on a Sun Ray appliance through a 10 Mbps and 100 Mbps Ethernet network.

Result: It is possible to transmit PAL or NTSC video streams through the broadband network and out to the Sun Ray appliance on both 10 Mbps and 100 Mbps. The image quality is good, but to get good streaming ability, the video transmit streams must be fine-tuned. Because performance is not within the scope of these tests, such tuning was not done. The video stream is currently good enough for less sophisticated video conferences; we do not yet know the quality of streaming video we can achieve. No audio streams were tested due to lack of audio equipment, but the AVA handles video and audio streams in a similar way, so such a test would not, at this stage, have yielded anything more than the one performed.

Test 8: Establish a method of interconnectivity between the test environment and the service delivery network (previously known as Architecture.COM).

Result: Interconnectivity to the service delivery network was accomplished through a per-port VLAN on a proxy switch. The VLAN connects to the service delivery network through a 100 Mbps Ethernet network. Interconnectivity was tested using a minimal service delivery network with a web server. It was possible to connect to the web server from the Sun Ray appliance through HTTP.
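The HTTP check in Test 8 needs nothing more than a minimal HTTP/1.0 request from the Sun Ray session; a sketch of such a request builder is shown below (the host name webserver is illustrative, not the actual name used in the lab):

```shell
# Build a minimal HTTP/1.0 HEAD request for a given host.
# "webserver" below is an illustrative name for the test web server.
build_head_request() {
  printf 'HEAD / HTTP/1.0\r\nHost: %s\r\n\r\n' "$1"
}

# In the lab, this request would be piped to port 80, for example:
#   build_head_request webserver | telnet webserver 80
build_head_request webserver
```

Any 200-class status line coming back confirms the per-port VLAN path from the security network out to the service delivery network.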


Two ELAN/VLAN configurations were established, one called security and another called london. The security configuration served the private Sun Ray network, and london served the public network. In addition, a third ELAN/VLAN configuration, called mgmt, was set up and used for out-of-band management purposes. All ELANs were configured as anycast services on both switches.

The global topology file for the broadband environment was set up as a Distributed LAN Emulation (DLE) set on both switches, as shown in the following code example.

    CODE EXAMPLE 1 Global Topology File for Broadband Environment

    # LECS.CFG
    # Date: 12/24/00 19:57
    # Revision date: 2001-03-03 
    # TFTP-host(s): Sun Ray01-m
    # User: Sun Microsystems
    # Revisor: Lars Persson, Sun PS
    # LECS in asx21, asx41
    # The search ordering of elan names
    Match.Ordering: london, security, \
    # Parameters for elans
    .Multicast_Send_VCC_Type:    Best Effort
    .Maximum_Unknown_Frame_Time:   1
    .LAN_Type:    Ethernet/IEEE 802.3
    .Maximum_Unknown_Frame_Count:  1
    .VCC_TimeOut_Period:   1200
    .Forward_Delay_Time:   15
    .Maximum_Frame_Size:   1516
    .Expected_LE_ARP_Response_Time: 1
    .Path_Switching_Delay:  6
    .Aging_Time:   300
    .Control_TimeOut:    120
    .Connection_Complete_Timer:   4
    .Flush_TimeOut: 4
    .Maximum_Retry_Count:  1
    # Parameters for DLE elan: london
    # LES/BUS on asx21, asx41
    london.Address:         c5100000aaaa0000aaaa0000aaaa0000aaaa0192
    # Parameters for DLE elan: mgmt
    # LES/BUS on asx21, asx41
    mgmt.Address:          c5500000aaaa0000aaaa0000aaaa0000aaaa019a
    # Parameters for DLE elan: security
    # LES/BUS on asx21, asx41
    security.Address:        c5600000aaaa0000aaaa0000aaaa0000aaaa019c
    # entries that the VLAN Manager does not parse at this time
    LECS.Reload_Period: 30 
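The ELAN-to-address mapping in CODE EXAMPLE 1 can be extracted mechanically when auditing a LECS configuration. A minimal sketch, run here against a sample file that mirrors the `<elan>.Address:` lines above:

```shell
# Create a sample file mirroring the ELAN address lines of CODE EXAMPLE 1.
cat > /tmp/lecs.cfg <<'EOF'
# Parameters for DLE elan: london
london.Address: c5100000aaaa0000aaaa0000aaaa0000aaaa0192
# Parameters for DLE elan: mgmt
mgmt.Address: c5500000aaaa0000aaaa0000aaaa0000aaaa019a
# Parameters for DLE elan: security
security.Address: c5600000aaaa0000aaaa0000aaaa0000aaaa019c
EOF

# Split each "<elan>.Address: <address>" line on '.' and ':' and print
# the ELAN name next to its broadband address.
awk -F'[.:]' '/\.Address:/ { gsub(/^ +/, "", $3); print $1, $3 }' /tmp/lecs.cfg
```

Each output line pairs an ELAN name with its address, which makes it easy to verify that both switches carry identical DLE definitions.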
All equipment used was set up with an ELAN instance in the mgmt net so it could be reached through the broadband network. The IP plan on the management network was as follows:

    Network / Netmask / Broadcast
    es3810 (proxy switch)    10.1.101
    Sun Ray server

On the Sun Ray server, the networks were configured as follows:
    lo0: flags=1000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4> mtu 8232 index 1
        inet netmask ff000000 
    fa0: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 9188 index 5
        inet netmask 0 
        ether 0:20:48:2e:1f:c6
    fa1: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 9188 index 6
        inet netmask 0 
        ether 0:20:48:2e:33:e 
    el1: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 7
        inet netmask fffffe00 broadcast
        ether 0:20:48:2e:1f:c6 
    el0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 8
        inet netmask ffffff00 broadcast
        ether 2:20:48:2e:1f:c6 
    el2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 9
        inet netmask ffffff00 broadcast
        ether 6:20:48:2e:1f:c6

In this example, security is represented by el0, london by el2, and mgmt by el1. The interfaces fa0 and fa1 are the actual BICs. The el interfaces emulate Ethernet. The proxy switch has two ELAN/VLAN combinations (london and security) and one ELAN (mgmt).

The security VLAN was used to connect the Sun Ray appliances, and london provides Ethernet access to the public network. The mgmt ELAN instance was for administrative purposes. The london and security configurations have both 10 Mbps and 100 Mbps Ethernet interfaces. Finally, each ASX switch has one instance in the mgmt ELAN for administrative purposes. The AVA resides on the london network, and so does the AVA TRADER.
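For completeness, a hedged sketch of how the security interface might be brought up and handed to the Sun Ray software on Solaris. The IP address is hypothetical (the real plan is site-specific), and the utadm invocation is illustrative; utadm ships with Sun Ray Server Software:

```shell
# Bring up the emulated Ethernet interface for the security ELAN.
# The address below is hypothetical; the actual IP plan is site-specific.
ifconfig el0 plumb
ifconfig el0 10.1.102.1 netmask 255.255.255.0 up

# Declare el0 as the dedicated Sun Ray interconnect and list the
# resulting configuration (utadm is part of Sun Ray Server Software).
/opt/SUNWut/sbin/utadm -a el0
/opt/SUNWut/sbin/utadm -l
```

Making the address persistent across reboots follows the usual Solaris convention of an /etc/hostname.el0 file naming the interface's host entry.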


Despite the limitations in the video broadcasting test and some reservations regarding the distance test, it can be seen clearly that Sun Ray appliances do work in a broadband environment, and that the infrastructure itself offers several beneficial features normally provided by computers: load sharing, robust and persistent session handling, failover, and so on. With long-distance single-mode fibers, it is possible to have a network radius of at least 120 km (one interlink hop).
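The 120 km radius implies only a negligible propagation delay. A back-of-envelope sketch, assuming roughly 5 microseconds per kilometer for light in single-mode fiber (about two-thirds the speed of light in vacuum):

```shell
# Propagation delay over 120 km of single-mode fiber, assuming
# roughly 5 microseconds per kilometer.
awk 'BEGIN {
  km = 120; us_per_km = 5
  printf "%d us one-way, %.1f ms round trip\n", km * us_per_km, 2 * km * us_per_km / 1000
}'
# -> 600 us one-way, 1.2 ms round trip
```

A round trip on the order of a millisecond is far below anything a user would notice, which is consistent with Test 6 showing no observable delay at 35 km.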
