Table of Contents
- Microsoft SQL Server Defined
- Microsoft SQL Server Features
- Microsoft SQL Server Administration
- Microsoft SQL Server Programming
- An Outline for Development
- Database Services
- Database Objects: Databases
- Database Objects: Tables
- Database Objects: Table Relationships
- Database Objects: Keys
- Database Objects: Constraints
- Database Objects: Data Types
- Database Objects: Views
- Database Objects: Stored Procedures
- Database Objects: Indexes
- Database Objects: User Defined Functions
- Database Objects: Triggers
- Database Design: Requirements, Entities, and Attributes
- Business Process Model Notation (BPMN) and the Data Professional
- Business Questions for Database Design, Part One
- Business Questions for Database Design, Part Two
- Database Design: Finalizing Requirements and Defining Relationships
- Database Design: Creating an Entity Relationship Diagram
- Database Design: The Logical ERD
- Database Design: Adjusting The Model
- Database Design: Normalizing the Model
- Creating The Physical Model
- Database Design: Changing Attributes to Columns
- Database Design: Creating The Physical Database
- Database Design Example: Curriculum Vitae
- The SQL Server Sample Databases
- The SQL Server Sample Databases: pubs
- The SQL Server Sample Databases: NorthWind
- The SQL Server Sample Databases: AdventureWorks
- The SQL Server Sample Databases: Adventureworks Derivatives
- UniversalDB: The Demo and Testing Database, Part 1
- UniversalDB: The Demo and Testing Database, Part 2
- UniversalDB: The Demo and Testing Database, Part 3
- UniversalDB: The Demo and Testing Database, Part 4
- Getting Started with Transact-SQL
- Transact-SQL: Data Definition Language (DDL) Basics
- Transact-SQL: Limiting Results
- Transact-SQL: More Operators
- Transact-SQL: Ordering and Aggregating Data
- Transact-SQL: Subqueries
- Transact-SQL: Joins
- Transact-SQL: Complex Joins - Building a View with Multiple JOINs
- Transact-SQL: Inserts, Updates, and Deletes
- An Introduction to the CLR in SQL Server 2005
- Design Elements Part 1: Programming Flow Overview, Code Format and Commenting your Code
- Design Elements Part 2: Controlling SQL's Scope
- Design Elements Part 3: Error Handling
- Design Elements Part 4: Variables
- Design Elements Part 5: Where Does The Code Live?
- Design Elements Part 6: Math Operators and Functions
- Design Elements Part 7: Statistical Functions
- Design Elements Part 8: Summarization Statistical Algorithms
- Design Elements Part 9: Representing Data with Statistical Algorithms
- Design Elements Part 10: Interpreting the Data - Regression
- Design Elements Part 11: String Manipulation
- Design Elements Part 12: Loops
- Design Elements Part 13: Recursion
- Design Elements Part 14: Arrays
- Design Elements Part 15: Event-Driven Programming Vs. Scheduled Processes
- Design Elements Part 16: Event-Driven Programming
- Design Elements Part 17: Program Flow
- Forming Queries Part 1: Design
- Forming Queries Part 2: Query Basics
- Forming Queries Part 3: Query Optimization
- Forming Queries Part 4: SET Options
- Forming Queries Part 5: Table Optimization Hints
- Using SQL Server Templates
- Transact-SQL Unit Testing
- Index Tuning Wizard
- Unicode and SQL Server
- SQL Server Development Tools
- The SQL Server Transact-SQL Debugger
- The Transact-SQL Debugger, Part 2
- Basic Troubleshooting for Transact-SQL Code
- An Introduction to Spatial Data in SQL Server 2008
- Performance Tuning
- Practical Applications
- Professional Development
- Application Architecture Assessments
- Business Intelligence
- Tips and Troubleshooting
- Additional Resources
Transact-SQL: Inserts, Updates, and Deletes
Last updated Mar 28, 2003.
I've been explaining the basics of Transact-SQL (T-SQL), the native language that SQL Server uses to allow you to perform actions on data and structures. The statements you use to change or add data are called "Data Manipulation Language" (DML) statements. The statements you use to change the structures in the database like tables are called "Data Definition Language" (DDL) statements. Statements that you use to control access to the data are called "Data Control Language" (DCL) statements. The type I'll focus on in this article is DML.
After you've learned to perform complex selects and joins on the data in your tables, the next step is to learn how to insert, update and delete data.
In this article I'll cover the various ways that you can get data into your tables. You'll learn to change the data based on various criteria, and finally I'll show you how to delete some data based on other conditions, or all of it in one go.
Keep in mind that once these commands complete, the changes are permanent. I'll cover the exception to this rule in the next tutorial, when I explain transactions and batches. For now, make sure you create a backup of your test database before you try out these commands.
The concept here is quite simple. In an insert operation, you add rows into an existing table. The format of the command looks like this:
INSERT INTO sometable VALUES ('value1', 'value2'...)
Actually, the INTO part is optional in Transact-SQL (T-SQL), the dialect of SQL that SQL Server uses. Starting with SQL Server 2008, you can even add more than one row of data at a time, which looks like this:
INSERT INTO sometable VALUES ('value1', 'value2'...), ('value3', 'value4'...)
And so on. You can also still follow the same process as in earlier versions, by simply using multiple INSERT statements.
The command above inserts a row of data with the values provided. Let's take a concrete example. The command below inserts a row of data into a table you've made called NewAuthors in the pubs sample database (covered earlier in this series):
INSERT INTO NewAuthors VALUES( '111-222-2222' , 'Woody' , 'Buck' , '123-456-7890' , '123 Here Street' , 'Tampa' , 'FL' , 12345)
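Here's the same idea using the multi-row form available in SQL Server 2008 and later - two rows for NewAuthors in a single statement (the values here are fictional, invented just for the example):

INSERT INTO NewAuthors VALUES
    ('111-222-3333', 'Smith', 'Jane', '555-111-2222',
     '456 There Street', 'Tampa', 'FL', 12345),
    ('111-222-4444', 'Jones', 'Bob', '555-333-4444',
     '789 Where Street', 'Tampa', 'FL', 12345)

On earlier versions, you'd simply run two separate INSERT statements to get the same result.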
Notice that some values have single "tick marks" around them, and others don't. Certain values (numbers, specifically) don't require the delimiters. As a matter of fact, if I put tick marks around a value headed for a numeric column, the system treats it as text and may generate a conversion error.
While I'm on that subject, if you make an insert and it fails for whatever reason, none of the rows will go in. You'll learn more about that when I explain those transactions and batches I mentioned earlier.
The INSERT command shown above makes a couple of assumptions. It assumes that I know the order of the columns that I want to insert data into, and it assumes I want something in all of them. There are times, of course, when I just want to insert data in a few columns.
To add data to just a column or two, you can use the following syntax:
INSERT INTO sometable (ColumnName1, ColumnName2, ...) VALUES ('value1', 'value2'...)
This format is acceptable even when you want to specify all the columns. In fact, it's the recommended way to perform an insert: it lets you control the order, leave columns out, and is just generally safer.
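To make that concrete, here's the earlier NewAuthors insert rewritten with an explicit column list (the column names here are my guesses based on the values shown earlier; check them against your own table definition):

INSERT INTO NewAuthors
    (au_id, au_lname, au_fname, phone, address, city, state, zip)
VALUES
    ('111-222-2222', 'Woody', 'Buck', '123-456-7890',
     '123 Here Street', 'Tampa', 'FL', 12345)

With the columns spelled out, the statement keeps working even if someone later adds a column to the table or changes the column order.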
You can leave out any column that allows NULLs or has a default value, but if you leave out a column that doesn't allow NULLs and has no default, the insert will fail. When the table is designed, it's important to make careful decisions about the NULL-ability of each column. This is a good thing, and helps prevent data issues from occurring even if the user or program doesn't consider them. If a column does not allow NULLs, the next important decision is whether it's safe to set a default value for that column. Let's consider a concrete example.
Let's say I have a simple table called Orders that stores sales records at one of those "everything's a dollar" kind of store. It has the following fields:
OrderNumber
ItemNumber
ItemCost
ItemTax
In the table I'll require an order number and an item number, and I'll also make sure that ItemCost does not allow NULLs.
Sometimes, though, everything isn't really a dollar. Some items are so small they sell at two or three for a dollar. But even those items are tracked individually, so their unit price isn't quite a dollar. For that reason, I have to allow the user to enter a price. For most items, however, the price is in fact just one dollar, so it's a good idea to have a default value of 1.00. This saves typing on entry, and once again makes the data cleaner. Simply by setting the data type to a number and not allowing the value to be NULL, I've made the system enforce a good data set.
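Putting those decisions together, the Orders table definition might look something like this (a sketch only - I've picked money for the cost columns, but your types may differ):

CREATE TABLE Orders
(
    OrderNumber int   NOT NULL,
    ItemNumber  int   NOT NULL,
    ItemCost    money NOT NULL DEFAULT 1.00,  -- most items really are a dollar
    ItemTax     money NULL                    -- not every sale is taxed
)

The DEFAULT on ItemCost means an insert that skips that column still gets a clean, sensible value, while the NOT NULL keeps anyone from storing an order with no price at all.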
The store doesn't always have taxes applied to an item, for instance when they sell items over the Internet or to a charity. (Okay, not many people shop on the Internet from a dollar store, but indulge me for a moment.)
With the way I have things set up right now, the following statement would perform a legal insert, since the two columns I leave out either have a default or allow NULLs:
INSERT INTO Orders (OrderNumber, ItemNumber) VALUES (123, 765)
Let's extend this example a little further. If the primary key on this table is the OrderNumber column, and that column is set to an Identity type (one that adds the next highest number automatically), then SQL Server generates the number for the column. In that case, this would work:
INSERT INTO Orders (ItemNumber) VALUES (765)
Let's try it this way:
INSERT INTO Orders VALUES (765)
However, running that bit of code doesn't work at all. Without a column list, SQL Server expects a value for every column except the Identity column, and here I've supplied only one. Whenever I'm skipping columns, I have to name the columns I'm filling.
There's an interesting side-effect that is important to keep in mind when inserting data into a table that has an Identity column. During the insert, the Identity value will auto-increment, but if the data is later deleted, the numbers won't be re-used. That means that if I insert 1,000 rows into a table, the Identity column will increment by 1,000. If those rows are deleted, the next value will be 1,001 in the Identity column, even if there are only two values in the table. If you're examining that data later, that might surprise you.
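You can see the gap for yourself with a quick experiment (this uses a throwaway table, so it's safe to run in a test database):

CREATE TABLE IdentityTest (RowID int IDENTITY(1,1), Letter char(1))
INSERT INTO IdentityTest (Letter) VALUES ('a')
INSERT INTO IdentityTest (Letter) VALUES ('b')
DELETE FROM IdentityTest          -- removes both rows
INSERT INTO IdentityTest (Letter) VALUES ('c')
SELECT * FROM IdentityTest        -- RowID comes back as 3, not 1
DROP TABLE IdentityTest

The deleted values 1 and 2 are never reused; the Identity counter just keeps moving forward.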
You can also use sub-selects to populate data into a table. That means you can select the value of data with a standard statement similar to what I've shown you so far, and use the result of that as an insert into another table. This is a very common and powerful pattern of inserting data.
To demonstrate, I'll create a table in the pubs database to hold some sample data:
USE pubs
GO
CREATE TABLE TestTable (AuthorNames varchar(100) NULL)
GO
Now I'll use a SELECT statement to concatenate the first name, a space, and the last name from the authors table, as I showed you in the previous article. I'll insert that single new value into my new table:
INSERT INTO TestTable
SELECT au_fname + ' ' + au_lname
FROM authors
GO
Now let's see the results:
SELECT * FROM TestTable
Here are the results:
Abraham Bennet
Reginald Blotchet-Halls
Cheryl Carson
Michel DeFrance
...
You can insert data from a SELECT statement and create a new destination table on the fly. Using this syntax, you can create a whole new table from a single insert statement, without creating the table first:
SELECT au_fname, au_lname
INTO TestTable2
FROM authors
GO
Notice that there is no INSERT statement there - the SELECT...INTO does all the work. Systems that build "cubes" of summarized data often use this process to create staging or summary tables.
You can also insert data into a View using the same methods I've been using here - but if the View is made up of more than one table, you can only insert, update or delete data in one of those tables at a time.
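For example, a single-table view over authors accepts DML just like the base table (a sketch using the pubs sample; I use an UPDATE here because authors has several required columns that would make a bare INSERT through this narrow view fail):

CREATE VIEW AuthorPhones
AS
SELECT au_id, au_lname, phone FROM authors
GO
-- Change a phone number through the view; the base table is updated
UPDATE AuthorPhones
SET phone = '408 496-7223'
WHERE au_id = '172-32-1176'

Because the view touches only one table, SQL Server knows exactly where the change belongs.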
After you insert the data, you will periodically need to make changes to it. There are two ways to do this: updating all data at once, or updating data based on conditions.
To update all rows at a time, you can use the following syntax, which makes the change to the column in every row of the table:
UPDATE TestTable
SET AuthorNames = 'Buck'
GO
That statement (which I do not recommend you run) sets every name to "Buck". If you're making bulk changes, this command is quite powerful. Notice that the table name follows the UPDATE keyword, and the SET command follows that. After SET comes the column to update, an operator, and the value you want. To update more than one column, separate the column assignments with commas:
UPDATE TestTable2 SET au_fname = 'Buck', au_lname = 'Woody'
You can also update values using other operators. Let's return to our fictional table of dollar-store sales. To include an across-the-board tax change to all current orders, you would use this command:
UPDATE Orders SET ItemTax = (ItemCost * .06)
This command would make the change to add a 6% sales tax on all items in the table.
Most often, though, what you're after is a smaller set of changes. In that case, you can restrict the rows that are updated with a WHERE clause, just as you did with a SELECT operation:
UPDATE TestTable2 SET au_lname = 'Woody' WHERE au_lname = 'Dunn'
You can use those sub-select operators in the WHERE clause as well, making a very powerful combination.
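For instance, to update only the authors who have actually published a title, you could drive the WHERE clause with a sub-select (using the pubs sample tables):

UPDATE authors
SET contract = 1
WHERE au_id IN
    (SELECT au_id FROM titleauthor)

Only the rows whose au_id appears in titleauthor are touched; everyone else is left alone.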
Remember the Identity column issue I explained earlier with inserts? The update operation doesn't work the same way: changes to other columns don't affect the Identity value. What's really interesting is that under the covers, SQL Server can carry out an update as a delete followed by an insert. Even then, the Identity column keeps its value.
Let's talk about removing data once it's in the table. The command to use for this is DELETE, and the important thing to know about this command is that it removes a whole row of data, not a column. There is no column delete function; for that you would use the UPDATE command to set the column to a NULL value, if possible.
To delete all the rows in a table, you can use the following command:
DELETE FROM TestTable
To limit the deletions, use the WHERE condition just as you did with the SELECT and UPDATE statements. This command removes rows from the TestTable2 table where the last name is "Woody":
DELETE FROM TestTable2 WHERE au_lname = 'Woody'
This command removes all the rows in the table, since you used the UPDATE command earlier to set every last name to that value. You can also use joins and sub-selects to limit the selection of data you wish to delete. Just as before, work slowly, and always test the sub-select statement first to ensure you're deleting the data you want.
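For example, to remove only the TestTable rows whose names match a real author, run the sub-select on its own first, then wrap the same condition in the DELETE (a sketch against the TestTable created earlier):

-- Step 1: verify which rows the condition matches
SELECT * FROM TestTable
WHERE AuthorNames IN
    (SELECT au_fname + ' ' + au_lname FROM authors)

-- Step 2: only then run the delete with the same condition
DELETE FROM TestTable
WHERE AuthorNames IN
    (SELECT au_fname + ' ' + au_lname FROM authors)

If step 1 returns rows you didn't expect, you've just saved yourself from deleting them.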
All of the commands you've seen so far (INSERT, UPDATE and DELETE) log each row they change in the transaction log. This ensures data integrity, but if the table is quite large it can take a long time, and it causes the log to grow to accommodate the size of the operation.
To keep the structure of a table but remove all the rows of data, there's another command you can use:
TRUNCATE TABLE TestTable2
TRUNCATE's big advantage is that it is minimally logged, which makes it very fast. The disadvantages are that outside of an explicit transaction it can't be undone, and it doesn't permit any conditions. It just removes all rows from the table, no questions asked, and it resets any Identity column back to its seed value.
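That Identity reset is worth seeing once, because DELETE behaves differently (a quick sketch with a throwaway table):

CREATE TABLE TruncateTest (RowID int IDENTITY(1,1), Letter char(1))
INSERT INTO TruncateTest (Letter) VALUES ('a')
TRUNCATE TABLE TruncateTest
INSERT INTO TruncateTest (Letter) VALUES ('b')
SELECT RowID FROM TruncateTest   -- RowID starts over at 1, not 2
DROP TABLE TruncateTest

Had the second line been a DELETE instead of a TRUNCATE, the new row would have come back with RowID 2.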
Along with the SELECT command, the INSERT, UPDATE and DELETE statements are some of the most important commands in T-SQL. Practice with them in a test system and test database until you're comfortable with all of them - you'll be using them for years to come, whether you're a Database Administrator or a Data Developer.