Testing Applications

Having the capability to build Ajax applications with Java gives you many tools that let you maintain larger applications with less work. One very important aspect of maintaining a large application is being able to easily create unit tests for most, if not all, functionality. This need comes from a common problem with software development: the code size grows to a point where small changes can have cascading effects that create bugs.

It has become common practice to incorporate heavy testing into the development cycle. In the traditional waterfall development cycle you would write code to a specification until the specification was complete. Then the application would be passed to testers who would look for bugs. Developers would respond to bug reports by fixing the bugs. Once all the bugs were fixed, the product would be shipped. Figure 4-39 illustrates the steps in traditional software development testing.

Figure 4-39 Old-style testing = bad

The problem encountered with this type of development cycle is that during the bug finding and fixing phase, code changes can easily cause more bugs. To fix this problem, testers would need to start testing right from the beginning after every code change to ensure new bugs weren't created and old bugs didn't reappear.

One successful testing methodology has developers write automated unit tests before they write the features. The tests cover every use case of the new feature to be added. The first time the test is run, it will fail for each case. The development process then continues until each test case in the unit test is successful. Then the unit test becomes part of a test suite for the application and is run before committing any source code changes to the source tree. If a new feature causes any part of the application to break, other tests in the automated test suite will identify this problem, since every feature of the application has had tests built. If a bug is found at this point, it is relatively easy to pinpoint the source since only one new feature was added. Finding and fixing bugs early in the development lifecycle like this is much easier and quicker than finding and fixing them at the end. The test suite grows with the application. The initial investment in time to produce the unit tests pays off over the long run since they are run again on every code change, ensuring each feature's health. Figure 4-40 illustrates this process.

Figure 4-40 Test-first testing = good

In practice, when comparing this approach to the one illustrated in Figure 4-39, finding bugs earlier saves a large amount of time, and there is less need for a large testing team since the developer is responsible for much of the testing.
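The write-a-failing-test-first loop described above can be sketched in miniature in plain Java. The `add` feature and hand-rolled `assertEquals` here are hypothetical stand-ins, not classes from the book's application:

```java
// A minimal illustration of test-first development. The test in main() was
// (notionally) written before add() existed; it failed until the feature
// was implemented, and now guards against regressions on every run.
public class TestFirstSketch {

    // The feature under test, implemented only after the test was in place.
    public static int add(int cents, int moreCents) {
        return cents + moreCents;
    }

    // A hand-rolled stand-in for JUnit's assertEquals.
    public static void assertEquals(int expected, int actual) {
        if (expected != actual) {
            throw new AssertionError("expected " + expected + " but was " + actual);
        }
    }

    public static void main(String[] args) {
        assertEquals(30, add(10, 20));   // fails until add() is correct
        System.out.println("all tests passed");
    }
}
```

As each new feature gains a test like this, the suite accumulates, and every later change reruns the whole history of tests.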

This technique is relatively novel for client-side web applications. With traditional web applications, testing is reduced to usability testing and making sure that different browsers render pages properly. This is one of the great things about HTML: it's a declarative language that leaves little room for logic bugs, so it's easy to deploy HTML pages that work (browser-rendering quirks aside). Using JavaScript, however, introduces the possibility of logic bugs. This wasn't much of a problem when JavaScript was used lightly, but for Ajax applications that use JavaScript heavily, logic bugs are a real concern. Since JavaScript is not typed and has no compile step, many bugs can be found only by running the application, which makes unit testing difficult; it is also difficult to test an entire application through its interface. Simple bugs, such as calling an undefined function, cannot be caught without running the program and executing the code that contains the bug, whereas with Java you could catch these bugs immediately in the IDE or at compile time. From a testing perspective, it does not make sense to build large Ajax applications with JavaScript.

Using JUnit

JUnit is another great Java tool, one that helps you create automated tests for your application. It provides classes that assist in building and organizing tests: assertions for checking expected results, a test-case base class that lets you group several tests with shared setup, and a mechanism for joining tests together into a test suite. To create a test case for JUnit you would typically extend the TestCase class, but since GWT applications require a special environment, GWT provides a GWTTestCase class for you to extend instead.
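The pattern JUnit's TestCase provides can be pictured with a small hand-rolled analog. The classes below are stand-ins that just show the shape (a setUp fixture, assertions, test methods found and run by a suite); the real classes live in junit.jar:

```java
// A hand-rolled sketch of JUnit's TestCase pattern, not the real JUnit API.
abstract class MiniTestCase {
    protected void setUp() {}               // runs before every test method
    protected void assertTrue(boolean cond) {
        if (!cond) throw new AssertionError("assertion failed");
    }
}

public class MiniJUnitSketch extends MiniTestCase {
    private java.util.List<String> fixture;

    @Override
    protected void setUp() {                // shared fixture for all tests
        fixture = new java.util.ArrayList<>();
        fixture.add("gwt");
    }

    public void testFixtureNotEmpty() { assertTrue(!fixture.isEmpty()); }
    public void testFixtureContents() { assertTrue(fixture.contains("gwt")); }

    public static void main(String[] args) {
        // A real suite discovers test* methods reflectively; we call them here.
        MiniJUnitSketch t = new MiniJUnitSketch();
        t.setUp(); t.testFixtureNotEmpty();
        t.setUp(); t.testFixtureContents();
        System.out.println("2 tests passed");
    }
}
```

GWTTestCase layers the GWT environment on top of this same pattern, which is why the generated test classes below look familiar.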

Let's walk through the creation of a test case for the Multi-Search application in Chapter 7. The first step is to use the GWT junitCreator script to generate a test case class and some scripts that can launch the test case. The junitCreator script takes several arguments to run. Table 4-1 outlines each argument.

Table 4-1. junitCreator Script Arguments

-junit
   Specifies the location of the junit jar file. You can find a copy in the plugins directory of your Eclipse installation.
   Example: -junit E:\code\eclipse\plugins\org.junit_3.8.1\junit.jar

-module
   Specifies the GWT module that you'll be testing. It is required since the environment needs to run this module for your test.
   Example: -module com.gwtapps.multisearch.MultiSearch

-eclipse
   Specifies your Eclipse project name if you want to generate Eclipse launch configurations.
   Example: -eclipse GWTApps

(test class name)
   The last argument is the class name for the test case, which should typically use the same package as the code being tested.
   Example: com.gwtapps.multisearch.client.MultiSearchTest

To run this script for the Multi-Search application we can use the following command:

junitCreator -junit E:\code\eclipse\plugins\org.junit_3.8.1\junit.jar
   -module com.gwtapps.multisearch.MultiSearch -eclipse GWTApps
   com.gwtapps.multisearch.client.MultiSearchTest

Figure 4-41 shows the output from this command. The script created two scripts, two launch configurations for launching the test in web mode or hosted mode, and one test case class that is stored in the test directory. In Eclipse the test case class will look like Figure 4-42.

Figure 4-41 Using junitCreator to generate a test case

Figure 4-42 A generated test case in Eclipse

The generated test case has two methods. The first, getModuleName, is required by GWT and must specify the module that is being tested. The junitCreator script has set this value to the Multi-Search module because it was specified with the module command line argument. The second method, a test case, is implemented as a simple test that just asserts that the value true is true. You can build as many test cases as you like in this one class.

You can run the tests by running the scripts generated by junitCreator. Alternatively, you can launch JUnit inside Eclipse for a visual representation of the results. Running inside Eclipse also lets you debug the JUnit test case, which can greatly assist in finding the cause when a test fails. Since junitCreator created a launch configuration for Eclipse, we can simply click the Run or Debug icon in the Eclipse toolbar and select the MultiSearchTest launch configuration from the drop-down menu. After launching this configuration, the JUnit view automatically displays in Eclipse. When the test has completed, you will see the results in the JUnit view, as shown in Figure 4-43. Notice the familiar check marks, displayed in green in Eclipse, next to the test case indicating that it was successful.

Figure 4-43 Running a JUnit test case from Eclipse

Now let's create a test case for each type of search engine that the application uses. Adding the following code to the test class creates four new tests:

protected MultiSearchView getView(){
   MultiSearchView view = new MultiSearchView( new MultiSearchViewListener(){
      public void onSearch( String query ){}
   });
   RootPanel.get().add( view );
   return view;
}

protected void doSearchTest( Searcher searcher ){
   searcher.query( "gwt" );
}

public void testYahoo() {
   doSearchTest( new YahooSearcher( getView() ) );
}

public void testFlickr() {
   doSearchTest( new FlickrSearcher( getView() ) );
}

public void testAmazon() {
   doSearchTest( new AmazonSearcher( getView() ) );
}

public void testGoogleBase() {
   doSearchTest( new GoogleBaseSearcher( getView() ) );
}

The first two methods, getView and doSearchTest, are helper methods for each test in this test case. The getView method simply creates a view, the MultiSearchView defined in the application, and adds it to the RootPanel so that it is attached to the document. Then the doSearchTest method sends a query to a Searcher class implementation. Each test case instantiates a different Searcher implementation and sends it to the doSearchTest method. When JUnit runs, each test case runs and submits a query to the respective search engine. Figure 4-44 shows what the result looks like in the Eclipse JUnit view.

Figure 4-44 Running several tests in one test case

If any search failed by an exception being thrown, then the stack trace for the exception would display in the right pane of this view and a red X icon would display over the test case.

The problem with this test case is that it doesn't verify the results. JUnit provides many assertion helpers that compare actual results to expected results. In this case, however, our results are asynchronous: they don't arrive until after the test method completes. Since much of Ajax development is asynchronous, GWT provides help here in the form of the delayTestFinish method.

To use this method we need to have a way of validating an asynchronous request. When we have validated that an asynchronous request is complete, then we call the finishTest method. In the case of the MultiSearch test, we will validate when we receive one search result. To do this we need to hook into the application to intercept the asynchronous event. This requires a bit of knowledge about the application and may seem a little obscure otherwise. We will create a mock object, which is an object that pretends to be another object in the application, to simulate the SearchResultsView class. By simulating this class we will be able to extend it and override the method that receives search results. The class can be declared as an inner class on the test case like this:

private class MockSearchResultsView extends SearchResultsView {
   public MockSearchResultsView( SearchEngine engine ){
      super(engine);
   }

   public void clearResults(){}

   public void addSearchResult( SearchEngineResult result ){
      assertNotNull(result);
      finishTest();
   }
}

The class overrides the addSearchResult method, which one of the Searcher classes calls when a search result has been received from the server. Instead of adding the result to the view, the mock uses one of JUnit's assert methods, assertNotNull, to assert that the search engine result object is not null. Then it calls GWT's finishTest method to indicate that the asynchronous test is complete.
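Outside the GWT specifics, the mock-object idea can be sketched in plain Java. The interface and class names here are hypothetical illustrations, not the book's actual application classes:

```java
// The mock implements the same interface as the real view, but instead of
// rendering results it records and checks what it receives. Code under test
// only sees the interface, so the mock slots in unnoticed.
interface ResultsView {
    void addSearchResult(String result);
}

class MockResultsView implements ResultsView {
    int received = 0;

    public void addSearchResult(String result) {
        if (result == null) throw new AssertionError("result was null");
        received++;                     // record the call instead of rendering
    }
}

public class MockSketch {
    // Stand-in for the application code that delivers results to a view.
    public static void deliver(ResultsView view, String payload) {
        view.addSearchResult(payload);
    }

    public static void main(String[] args) {
        MockResultsView mock = new MockResultsView();
        deliver(mock, "gwt result");
        System.out.println("mock received " + mock.received + " result(s)");
    }
}
```

The MockSearchResultsView above follows the same principle, except that it extends a concrete class rather than implementing an interface.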

To run this test we need to change the doSearchTest method on the test case to insert the mock view and tell JUnit to wait for an asynchronous response:

protected void doSearchTest( Searcher searcher ){
   searcher.setView(
      new MockSearchResultsView(searcher.getView().getEngine()));
   searcher.query( "gwt" );
   delayTestFinish(5000);
}

In this code we set the view of the searcher to the mock view that we've created, and then call the delayTestFinish method with a value of 5,000 milliseconds (5 seconds). If the test does not complete within 5 seconds, it will fail. If the network connection is slow, you may want to consider a longer value here to properly test for errors.
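The delayTestFinish/finishTest pair can be pictured as a latch with a timeout: the test thread waits, and the asynchronous callback releases it. This plain-Java analog (not GWT's actual implementation) uses a background thread in place of a server response:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class AsyncTestSketch {
    // Returns true if the "asynchronous response" arrived before the timeout,
    // mirroring a delayed test that finishes, and false for a timed-out test.
    public static boolean runAsyncTest(long timeoutMillis) throws InterruptedException {
        CountDownLatch finished = new CountDownLatch(1);   // finishTest() analog

        new Thread(() -> {
            // stand-in for the asynchronous search response arriving
            finished.countDown();
        }).start();

        // delayTestFinish(timeoutMillis) analog: fail if the callback never fires
        return finished.await(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runAsyncTest(5000) ? "test finished" : "test timed out");
    }
}
```

In GWTTestCase you never manage the waiting yourself; calling delayTestFinish simply tells the framework to keep the test alive until finishTest or the timeout, whichever comes first.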

Running these tests at this point tests the application code in the proper GWT environment and with asynchronous events occurring. You should use these testing methods as you build your application so you have a solid regression testing library.

Benchmarking

When using GWT to create Ajax applications, user experience almost always comes first, and part of a good user experience is an application that performs well. Fortunately, since GWT has a compile step, each new GWT version can generate faster code, an advantage you don't have with regular JavaScript development. However, you shouldn't rely on the GWT team alone to improve performance; you should also aim to make your own code perform better. Starting with release 1.4, GWT includes a benchmarking subsystem that helps you make smart performance-based decisions when developing Ajax applications.

The benchmark subsystem works with JUnit. You can benchmark code through JUnit by extending GWT's Benchmark test case class instead of GWTTestCase. Using this class causes the benchmarking subsystem to kick in and measure the duration of each test. After the tests have completed, the benchmark system writes the results to disk as an XML file. You can open the XML file to read the results directly, but it is easier to view them in the benchmarkViewer application that comes with GWT.

Let's look at a simple example of benchmarking. We can create a benchmark test case by using the junitCreator script in the same way we would for a regular test case:

junitCreator -junit E:\code\eclipse\plugins\org.junit_3.8.1\junit.jar
   -module com.gwtapps.desktop.Desktop -eclipse GWTApps
   com.gwtapps.desktop.client.CookieStorageTest

In this code we're creating a test case for the cookie storage feature in Chapter 6's Gadget Desktop application. The application uses the CookieStorage class to save large cookies easily while taking browser cookie limits into account. In this test we're going to measure cookie performance. First, we extend the Benchmark class instead of GWTTestCase:

public class CookieStorageTest extends Benchmark {

   public String getModuleName() {
      return "com.gwtapps.desktop.Desktop";
   }

   public void testSimpleString(){
      try {
         CookieStorage storage = new CookieStorage();
         storage.setValue("test", "this is a test string");
         assertEquals( storage.getValue("test"), "this is a test string" );
         storage.save();
         storage.load();
         assertEquals( storage.getValue("test"), "this is a test string");

      } catch (StorageException e) { fail(); }
   }
}

You can run this benchmark from the Eclipse JUnit integration or the launch configuration generated by the junitCreator script. The test simply creates a cookie, saves it, loads it, and then verifies that it hasn't changed. The generated XML file will contain a measurement of the time it took to run this method. At this point the benchmark is not very interesting. We can add more complex benchmarking by testing with ranges.

Using ranges in the benchmark subsystem gives you the capability to run a single test case multiple times with different parameter values. Each run will have its duration measured, which you can later compare in the benchmark report. The following code adds a range to the cookie test to test writing an increasing number of cookies:

public class CookieStorageTest extends Benchmark {

   final IntRange smallStringRange =
      new IntRange(1, 64, Operator.MULTIPLY, 2);

   public String getModuleName() {
      return "com.gwtapps.desktop.Desktop";
   }
   /**
    * @gwt.benchmark.param cookies -limit = smallStringRange
    */
   public void testSimpleString( Integer cookies ){
      try {
         CookieStorage storage = new CookieStorage();
         for( int i=0; i< cookies.intValue(); i++){
            storage.setValue("test"+i, "this is a test string"+i);
            assertEquals( storage.getValue("test"+i),
               "this is a test string"+i );
         }
         storage.save();
         storage.load();
         for( int i=0; i< cookies.intValue(); i++){
            assertEquals( storage.getValue("test"+i),
               "this is a test string"+i );
         }
      } catch (StorageException e) { fail(); }
   }
   public void testSimpleString(){
   }
}

This code creates an IntRange. The parameters in the IntRange constructor create a range that starts at one and doubles until it reaches 64 (1, 2, 4, 8, 16, 32, 64). GWT passes each value in the range into a separate run of the testSimpleString method. GWT knows to do this from the annotation before the method, which identifies the parameter and the range to apply to it.

Notice that there is also a version of the testSimpleString method that takes no parameters. You need to provide this no-argument version because JUnit itself does not support test methods with parameters. The benchmark subsystem is aware of this and chooses the parameterized method.
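What the IntRange enumerates, and why it is useful, can be sketched in plain Java: generate the doubling sequence, then time a stand-in workload once per value to see how cost scales. The loop body here is a hypothetical substitute for the CookieStorage calls, not GWT's benchmark machinery:

```java
import java.util.ArrayList;
import java.util.List;

public class RangeSketch {
    // Analog of IntRange(start, limit, Operator.MULTIPLY, factor):
    // start, start*factor, start*factor^2, ... up to and including limit.
    public static List<Integer> multiplyRange(int start, int limit, int factor) {
        List<Integer> values = new ArrayList<>();
        for (int v = start; v <= limit; v *= factor) {
            values.add(v);                          // 1, 2, 4, 8, 16, 32, 64
        }
        return values;
    }

    public static void main(String[] args) {
        for (int cookies : multiplyRange(1, 64, 2)) {
            long t0 = System.nanoTime();
            StringBuilder store = new StringBuilder();
            for (int i = 0; i < cookies; i++) {     // stand-in for storage.setValue
                store.append("test").append(i).append("=value").append(i).append(';');
            }
            long elapsed = System.nanoTime() - t0;
            System.out.println(cookies + " cookies: " + elapsed + " ns");
        }
    }
}
```

The benchmark report plots exactly this kind of value-versus-duration series, which makes superlinear growth easy to spot.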

After running this code we can launch the benchmarkViewer application from the command line in the directory that the reports were generated in (this defaults to the Projects directory):

benchmarkViewer

The benchmarkViewer application shows a list of reports that are in the current directory. You can load a report by clicking on it in the list. Each report contains the source code for each test along with the results as a table and a graph. Figure 4-45 shows the result of the testSimpleString test.

Figure 4-45 Benchmark results for the cookie test

The benchmark system also recognizes begin and end methods. Methods like these let you separate out setup and teardown code that you don't want measured for each test. For example, to define a setup method for the testSimpleString test, you would write the following code:

public void beginSimpleString( Integer cookies ){
   /* do some initialization */
}