Data Dictionary
In database management systems, a data dictionary is a file that defines the basic organization of a database. It contains a list of all files in the database, the number of records in each file, and the names and types of each field. Most database management systems keep the data dictionary hidden from users to prevent them from accidentally destroying its contents.
Data dictionaries do not contain any actual data from the database, only bookkeeping information for managing it. Without a data dictionary, however, a database management system cannot access data from the database.
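In SQL Server, for example, much of this bookkeeping information is exposed through the system catalog views. A minimal sketch for listing every user table with its columns and data types (the output aliases are just illustrative):

-- List every user table with its columns and data types,
-- using SQL Server's catalog views (its data dictionary).
SELECT t.name  AS TableName,
       c.name  AS ColumnName,
       ty.name AS DataType,
       c.max_length AS MaxLengthBytes
FROM sys.tables  AS t
JOIN sys.columns AS c  ON c.object_id = t.object_id
JOIN sys.types   AS ty ON ty.user_type_id = c.user_type_id
ORDER BY t.name, c.column_id;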
Cursor
In database packages, a cursor is a control structure for the successive traversal (and potential processing) of records in a result set.
Cursors provide a mechanism by which a database client iterates over the records in a database. Using cursors, the client can get, put, and delete database records. Database programmers use cursors to process the individual rows returned by the database system for a query. Cursors address the problem of impedance mismatch, which arises because most procedural programming languages do not offer any mechanism for manipulating whole result sets at once. In this scenario, the application must process the rows of a result set sequentially. Thus one can think of a database cursor as an iterator over the collection of rows in the result set.
Several SQL statements do not require the use of cursors. That includes the INSERT statement, for example, as well as most forms of the DELETE and UPDATE statements. Even a SELECT statement may not involve a cursor if it is used in the SELECT INTO variation, which retrieves at most a single row directly into the application.
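In T-SQL the equivalent single-row retrieval is usually written as an assignment into a variable rather than the embedded-SQL SELECT ... INTO form. A minimal sketch, assuming a hypothetical Employees table:

-- No cursor needed: fetch at most one row directly into a variable.
DECLARE @LastName nvarchar(50);

SELECT @LastName = LastName
FROM dbo.Employees          -- hypothetical table
WHERE EmployeeID = 42;      -- at most one matching row is expected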
SQL Server is very good at handling sets of data. For example, you can use a single UPDATE statement to update many rows of data. There are times, however, when you want to loop through a series of rows and perform processing for each row. In this case you can use a cursor.
Please note that cursors are the SLOWEST way to access data inside SQL Server. They should only be used when you truly need to access one row at a time. The only reason I can think of for that is to call a stored procedure on each row. In the Cursor Performance article I discovered that cursors are over thirty times slower than set-based alternatives.
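When row-by-row processing really is required (for example, calling a stored procedure per row), a typical T-SQL cursor loop looks like the sketch below; the table, column, and procedure names are placeholders:

-- Hypothetical example: call a stored procedure once per customer row.
DECLARE @CustomerID int;

DECLARE customer_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT CustomerID FROM dbo.Customers;

OPEN customer_cursor;
FETCH NEXT FROM customer_cursor INTO @CustomerID;

WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC dbo.usp_ProcessCustomer @CustomerID;   -- per-row work
    FETCH NEXT FROM customer_cursor INTO @CustomerID;
END;

CLOSE customer_cursor;
DEALLOCATE customer_cursor;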
Read more about Cursors
http://www.sqlteam.com/article/cursors-an-overview
http://en.wikipedia.org/wiki/Cursor_(databases)
Referential Integrity
A feature provided by relational database management systems (RDBMSs) that prevents users or applications from entering inconsistent data. Most RDBMSs have various referential integrity rules that you can apply when you create a relationship between two tables.
For example, suppose Table B has a foreign key that points to a field in Table A. Referential integrity would prevent you from adding a record to Table B that cannot be linked to Table A. In addition, the referential integrity rules might also specify that whenever you delete a record from Table A, any records in Table B that are linked to the deleted record will also be deleted. This is called cascading delete. Finally, the referential integrity rules could specify that whenever you modify the value of a linked field in Table A, all records in Table B that are linked to it will also be modified accordingly. This is called cascading update.
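In T-SQL these rules are declared on the foreign key itself. A minimal sketch, with invented table and column names:

-- Table A: the referenced (parent) table.
CREATE TABLE dbo.Department (
    DepartmentID int PRIMARY KEY,
    Name nvarchar(100) NOT NULL
);

-- Table B: the referencing (child) table with cascading rules.
CREATE TABLE dbo.Employee (
    EmployeeID int PRIMARY KEY,
    DepartmentID int NOT NULL
        REFERENCES dbo.Department (DepartmentID)
        ON DELETE CASCADE    -- deleting a department also deletes its employees
        ON UPDATE CASCADE    -- changing a DepartmentID propagates to employees
);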
Data Modeling
Data modeling in software engineering is the process of creating a data model by applying formal data model descriptions using data modeling techniques.
Data modeling is a method used to define and analyze the data requirements needed to support the business processes of an organization. The data requirements are recorded as a conceptual data model with associated data definitions. The actual implementation of the conceptual model is called a logical data model; implementing one conceptual data model may require multiple logical data models. Data modeling defines not just data elements, but also their structures and the relationships between them. Data modeling techniques and methodologies are used to model data in a standard, consistent, predictable manner in order to manage it as a resource. The use of data modeling standards is strongly recommended for all projects requiring a standard means of defining and analyzing data within an organization.
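As a toy illustration of moving from a conceptual model to a logical/physical one (the entity and column names are invented), the conceptual statement "a customer places orders" might be realized as:

-- Conceptual model: Customer places Order (one-to-many).
-- One possible logical/physical realization in SQL Server:
CREATE TABLE dbo.Customer (
    CustomerID int PRIMARY KEY,
    Name nvarchar(100) NOT NULL
);

CREATE TABLE dbo.[Order] (
    OrderID int PRIMARY KEY,
    CustomerID int NOT NULL REFERENCES dbo.Customer (CustomerID),
    OrderDate date NOT NULL
);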
Read More
http://en.wikipedia.org/wiki/Data_modeling
Table Scan vs. Nonclustered Index Access
An index scan retrieves all the rows from the table. An index seek retrieves only the selected rows from the table.
Index Scan:
Since a scan touches every row in the table whether or not it qualifies, the cost is proportional to the total number of rows in the table. Thus, a scan is an efficient strategy if the table is small or if most of the rows qualify for the predicate.
Index Seek:
Since a seek only touches rows that qualify and pages that contain these qualifying rows, the cost is proportional to the number of qualifying rows and pages rather than to the total number of rows in the table.
Whether a table scan will perform better than nonclustered index access depends on the table size, the rows' storage pattern, the row length, and the percentage of rows the query returns. First, because a table scan profits more from the read-ahead process on a large table, the percentage of returned rows at which a table scan is efficient is lower for a large table than for a medium-sized one. Second, the cost of the logical I/O performed through a nonclustered index depends on the rows' physical location on disk: if the rows are widely dispersed, the number of I/O operations will be high, and the percentage of returned rows at which a table scan beats the corresponding nonclustered index access will be lower than when the rows are stored contiguously. Third, row length influences how many rows fit into a single data page; when more rows fit on a page, read I/O operations during a table scan perform better than when only a few rows fit on a single page.
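As a hedged sketch (the table, column, and index names are placeholders), the difference typically shows up in the execution plan once a suitable nonclustered index exists for a selective predicate:

-- Without an index on OrderDate, this query typically produces a table scan
-- (or clustered index scan); with the index below, a selective date predicate
-- can use an index seek plus key lookups instead.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate);

SELECT OrderID, CustomerID, OrderDate
FROM dbo.Orders
WHERE OrderDate = '2024-01-15';   -- selective predicate -> index seek likely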
Read More:
http://blog.sqlauthority.com/2007/03/30/sql-server-index-seek-vs-index-scan-table-scan/
http://msdn.microsoft.com/en-us/library/aa224773(SQL.80).aspx
Triggers
Triggers are a special type of stored procedure defined to execute automatically in place of (INSTEAD OF) or after (AFTER) data modifications. They fire automatically on the INSERT, DELETE, and UPDATE triggering actions.
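A minimal AFTER UPDATE trigger sketch, with an invented audit table and column names:

-- Hypothetical audit trigger: record every change to Employees.Salary.
CREATE TABLE dbo.SalaryAudit (
    EmployeeID int,
    OldSalary money,
    NewSalary money,
    ChangedAt datetime2 DEFAULT SYSDATETIME()
);
GO
CREATE TRIGGER trg_Employees_SalaryAudit
ON dbo.Employees
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.SalaryAudit (EmployeeID, OldSalary, NewSalary)
    SELECT d.EmployeeID, d.Salary, i.Salary
    FROM deleted d
    JOIN inserted i ON i.EmployeeID = d.EmployeeID
    WHERE d.Salary <> i.Salary;   -- log only actual salary changes
END;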
What is the difference between clustered and nonclustered indexes?
There are clustered and nonclustered indexes. A clustered index is a special type of index that reorders the way records in the table are physically stored; therefore a table can have only one clustered index. The leaf nodes of a clustered index contain the data pages.
A nonclustered index is a special type of index in which the logical order of the index does not match the physical stored order of the rows on disk. The leaf nodes of a nonclustered index do not consist of data pages; instead, they contain index rows.
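As a brief sketch (the table and column names are placeholders):

-- One clustered index per table: the table's rows are stored in this order.
CREATE CLUSTERED INDEX IX_Employees_EmployeeID
    ON dbo.Employees (EmployeeID);

-- Many nonclustered indexes are allowed; their leaf level holds index rows
-- (key values plus row locators), not the data pages themselves.
CREATE NONCLUSTERED INDEX IX_Employees_LastName
    ON dbo.Employees (LastName);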