Channel: Teradata Downloads

Teradata 14.0 for Business (Webcast)


Explore the features and functions available in Teradata 14.0 that will have a direct impact on the business community!

The major areas of Quality/Supportability, Performance, Active Enable, and Ease of Use will be covered, including: Teradata Columnar; enhanced calendar capabilities for temporal; two new data types; and the new workload resource management architecture.

Presenter: Alison Torres, Director Teradata Warehouse Consulting – Teradata

Audience: Data Warehouse Business Users

Training details
Course Number: 50655
Training Format: Recorded webcast
Price: $195
Credit Hours: 1

Limiting BTEQ's Verbosity via ECHOREQ ERRORONLY


Do you need to reduce the number of informational messages that BTEQ produces? Then try the enhanced version of the BTEQ 14.10 command "SET ECHOREQ".

This article explains how to use the ECHOREQ command's ERRORONLY option to instruct BTEQ to echo back only the SQL requests that fail.

Why add a new ERRORONLY option to the ECHOREQ command?

Consider a large BTEQ script that contains several hundred SQL statements, whose BTEQ output needs to be retained. The script can use the ECHOREQ OFF setting to reduce the standard output size. This inhibits all the SQL requests from being echoed, but at the same time it becomes nearly impossible to identify a failed request.

This scenario was the primary reason to enhance BTEQ's ECHOREQ command with a new ERRORONLY option: echoed requests are suppressed, while SQL statements are still displayed when they fail.

After reading this article you will know how to control BTEQ's echoing of user input in order to reduce the size of standard output, while also being able to echo SQL statements selectively based on their success or failure. You will also see how to use BTEQ's SHOW command.


User input can contain BTEQ commands and SQL requests. Whenever user input is received by BTEQ (either in interactive or batch mode), BTEQ by default displays an exact copy of the input to the standard output stream. The only exceptions to this are for protected mode input, such as the LOGON command's password and LOGDATA command's data values.

The "echo required" function (ECHOREQ command) can be used to control BTEQ's echoing of the user input. The ECHOREQ command accepts three values: ON, OFF, and ERRORONLY. If no option is specified, BTEQ assumes ON.

Note that the ECHOREQ command is not related to the QUIET command, which suppresses a different category of informational messages altogether.

The behavior of all the options will be demonstrated throughout this article using a static BTEQ script named echoreq.bteq as shown below:

/* Display the current ECHOREQ setting */ 
.SHOW CONTROLS ECHOREQ 
/* Perform LOGON */ 
.LOGON nodeid/dbc,dbc 
/* Successful SQL statement */ 
SELECT date; 
/* Erroneous SQL statement */ 
SELECT junk; 
/* Successful BTEQ command */ 
.SET WIDTH 72 
/* Erroneous BTEQ command */ 
.SET WIDTH XX 
/* Display last SQL sent to database */ 
.SHOW 
/* Exit BTEQ */ 
.QUIT

Note: SHOW is a new BTEQ command that displays the last SQL request sent to the database. For more details on using the SHOW command, please refer to the Basic Teradata Query Reference Manual.

Using ECHOREQ ON

By default, ECHOREQ is set to ON, in which case BTEQ will echo all BTEQ commands and SQL requests to standard output.

Using either of the following variants will set ECHOREQ to ON:
.SET ECHOREQ ON
.SET ECHOREQ

Below is BTEQ's output with ECHOREQ initially set to ON.
Executed as: bteq .SET ECHOREQ ON < echoreq.bteq

+---------+---------+---------+---------+---------+---------+---------+----
.SET ECHOREQ ON
+---------+---------+---------+---------+---------+---------+---------+----
/* Display the current ECHOREQ setting */
.SHOW CONTROLS ECHOREQ
 
[SET] ECHOREQ = ON
 
+---------+---------+---------+---------+---------+---------+---------+----
.LOGON nodeid/dbc,
 
*** Logon successfully completed.
*** Teradata Database Release is 14.10.00.00
*** Teradata Database Version is 14.10.00.00
*** Transaction Semantics are BTET.
*** Session Character Set Name is 'ASCII'.
 
*** Total elapsed time was 7 seconds.
 
+---------+---------+---------+---------+---------+---------+---------+----
/* Successful SQL statement            */
SELECT date;
 
*** Query completed. One row found. One column returned.
*** Total elapsed time was 1 second.
 
    Date
--------
12/03/14
 
+---------+---------+---------+---------+---------+---------+---------+----
/* Erroneous SQL statement             */
SELECT junk;
*** Failure 3822 Cannot resolve column 'junk'. Specify table or view.
                Statement# 1, Info =0
*** Total elapsed time was 1 second.
 
 
+---------+---------+---------+---------+---------+---------+---------+----
/* Successful BTEQ command             */
.SET WIDTH 72
+---------+---------+---------+---------+---------+---------+---------+-
/* Erroneous BTEQ command              */
.SET WIDTH XX
*** Syntax Error : Width command must be followed by a number.
+---------+---------+---------+---------+---------+---------+---------+-
/* Display last SQL sent to database   */
.SHOW
 
SELECT junk;
 
+---------+---------+---------+---------+---------+---------+---------+-
.QUIT
*** You are now logged off from the DBC.
*** Exiting BTEQ...
*** RC (return code) = 8

Using ECHOREQ OFF

You can use "SET ECHOREQ OFF" if you do not want any BTEQ command or SQL request to be echoed by BTEQ. This setting can be useful when running large BTEQ scripts where the standard output file size is a concern and one wants to reduce BTEQ's informational output.

Note: The following are not affected by the ECHOREQ setting, and will be echoed regardless:
- A stand-alone BTEQ comment (which is not embedded in a BTEQ command or SQL request)
- The LOGON command (of course, the password is never echoed)

Below is BTEQ's output with ECHOREQ initially set to OFF.
Executed as: bteq .SET ECHOREQ OFF < echoreq.bteq

+---------+---------+---------+---------+---------+---------+---------+----
.SET ECHOREQ OFF
+---------+---------+---------+---------+---------+---------+---------+----
/* Display the current ECHOREQ setting */
 
[SET] ECHOREQ = OFF
 
+---------+---------+---------+---------+---------+---------+---------+----
.LOGON nodeid/dbc,
 
*** Logon successfully completed.
*** Teradata Database Release is 14.10.00.00
*** Teradata Database Version is 14.10.00.00
*** Transaction Semantics are BTET.
*** Session Character Set Name is 'ASCII'.
 
*** Total elapsed time was 6 seconds.
 
+---------+---------+---------+---------+---------+---------+---------+----
/* Successful SQL statement            */

*** Query completed. One row found. One column returned.
*** Total elapsed time was 1 second.
 
    Date
--------
12/03/14
 
+---------+---------+---------+---------+---------+---------+---------+----
/* Erroneous SQL statement             */
*** Failure 3822 Cannot resolve column 'junk'. Specify table or view.
                Statement# 1, Info =0
*** Total elapsed time was 1 second.
 
 
+---------+---------+---------+---------+---------+---------+---------+----
/* Successful BTEQ command             */
+---------+---------+---------+---------+---------+---------+---------+-
/* Erroneous BTEQ command              */
*** Syntax Error : Width command must be followed by a number.
+---------+---------+---------+---------+---------+---------+---------+-
/* Display last SQL sent to database   */
 
SELECT junk;
 
+---------+---------+---------+---------+---------+---------+---------+-
*** You are now logged off from the DBC.
*** Exiting BTEQ...
*** RC (return code) = 8

Using ECHOREQ ERRORONLY

You saw above that when using ECHOREQ OFF, no user input is echoed back. What if you wanted BTEQ to echo out the failed SQL request?

You can use "SET ECHOREQ ERRORONLY", instructing BTEQ to echo back only the failed SQL requests. The echoed SQL request is printed just after the database Failure/Error message is displayed. The ERRORONLY option is available starting with BTEQ 14.10.

With the new ERRORONLY option, all BTEQ commands are still echoed. However, this behavior of BTEQ commands being treated as with ON is subject to change: in a future release, BTEQ commands are expected to behave the same way as SQL requests, in that successful BTEQ commands will not be echoed, but failed ones will be.

Note: When ERRORONLY is in effect and a SQL request is issued with a REPEAT factor greater than 1, more than one error response may be generated. Even so, only one instance of the request text is echoed for the duration of the REPEAT.
 
Below is BTEQ's output with ECHOREQ initially set to ERRORONLY.
Executed as: bteq .SET ECHOREQ ERRORONLY < echoreq.bteq

+---------+---------+---------+---------+---------+---------+---------+----
.SET ECHOREQ ERRORONLY
+---------+---------+---------+---------+---------+---------+---------+----
/* Display the current ECHOREQ setting */
.SHOW CONTROLS ECHOREQ
 
[SET] ECHOREQ = ERRORONLY
 
+---------+---------+---------+---------+---------+---------+---------+----
.LOGON nodeid/dbc,
 
*** Logon successfully completed.
*** Teradata Database Release is 14.10.00.00
*** Teradata Database Version is 14.10.00.00
*** Transaction Semantics are BTET.
*** Session Character Set Name is 'ASCII'.
 
*** Total elapsed time was 7 seconds.
 
+---------+---------+---------+---------+---------+---------+---------+----
/* Successful SQL statement            */
 
*** Query completed. One row found. One column returned.
*** Total elapsed time was 1 second.
 
    Date
--------
12/03/14
 
+---------+---------+---------+---------+---------+---------+---------+----
/* Erroneous SQL statement             */
*** Failure 3822 Cannot resolve column 'junk'. Specify table or view.
                Statement# 1, Info =0
 
*** Request Text:
SELECT junk;
 
*** Total elapsed time was 1 second.
 
 
+---------+---------+---------+---------+---------+---------+---------+----
/* Successful BTEQ command             */
.SET WIDTH 72
+---------+---------+---------+---------+---------+---------+---------+-
/* Erroneous BTEQ command              */
.SET WIDTH XX 
*** Syntax Error : Width command must be followed by a number.
+---------+---------+---------+---------+---------+---------+---------+-
/* Display last SQL sent to database   */
.SHOW
 
SELECT junk;
 
+---------+---------+---------+---------+---------+---------+---------+-
.QUIT
*** You are now logged off from the DBC.
*** Exiting BTEQ...
*** RC (return code) = 8

Notice the BTEQ message "*** Request Text:" followed by the failed SQL request, which is displayed after the database Failure message.
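Because the failed requests are marked this way, a saved BTEQ log can also be post-processed automatically. Below is an illustrative Python sketch, not a BTEQ feature; it simply keys off the "*** Request Text:" marker shown in the output above to collect the failed SQL requests:

```python
def failed_requests(bteq_output: str) -> list[str]:
    """Collect the SQL requests that BTEQ echoed after '*** Request Text:'."""
    failures = []
    lines = bteq_output.splitlines()
    for i, line in enumerate(lines):
        if line.strip().startswith("*** Request Text:"):
            sql = []
            for follow in lines[i + 1:]:
                # The request text ends at the next blank or '***' line.
                if not follow.strip() or follow.strip().startswith("***"):
                    break
                sql.append(follow.strip())
            failures.append(" ".join(sql))
    return failures

sample = """\
*** Failure 3822 Cannot resolve column 'junk'. Specify table or view.
                Statement# 1, Info =0

*** Request Text:
SELECT junk;

*** Total elapsed time was 1 second.
"""
print(failed_requests(sample))  # ['SELECT junk;']
```

A script like this can turn a multi-megabyte log into a short list of exactly the statements that need attention.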

What if you do not have a BTEQ version that supports ECHOREQ ERRORONLY?

To get a similar ERRORONLY-like benefit from an older version of BTEQ, use the ECHOREQ OFF setting and modify your BTEQ script to issue a SHOW command after every SQL request via the IF...THEN command.

For example, the script echoreq.bteq can be modified (named echoreq_erroronly_workaround.bteq) as below:

/* Display the current ECHOREQ setting */
.SHOW CONTROLS ECHOREQ
/* Perform LOGON                       */
.LOGON nodeid/dbc,dbc
/* Successful SQL statement            */
SELECT date;
/* Check if the SQL statement failed   */
.IF ERRORCODE <> 0 THEN .RUN FILE = display_last_SQL.run
/* Erroneous SQL statement             */
SELECT junk;
/* Check if the SQL statement failed   */
.IF ERRORCODE <> 0 THEN .RUN FILE = display_last_SQL.run
/* Successful BTEQ command             */
.SET WIDTH 72
/* Erroneous BTEQ command              */
.SET WIDTH XX 
/* Display last SQL sent to database   */
.SHOW
/* Exit BTEQ                           */
.QUIT

where the display_last_SQL.run file's content is shown below:

.SET ECHOREQ ON
.SHOW
.SET ECHOREQ OFF

Prior to the BTEQ 14.10 release, the SHOW command does not display anything if ECHOREQ is set to OFF, which is why an ON-OFF pair is needed in the above RUN file.
 
Below is BTEQ's output with ECHOREQ initially set to OFF and using the workaround to display the failed SQL requests.
Executed as: bteq .SET ECHOREQ OFF < echoreq_erroronly_workaround.bteq

+---------+---------+---------+---------+---------+---------+---------+----
.SET ECHOREQ OFF
+---------+---------+---------+---------+---------+---------+---------+----
/* Display the current ECHOREQ setting */
 
[SET] ECHOREQ = OFF
 
+---------+---------+---------+---------+---------+---------+---------+----
/* Perform LOGON                       */
.LOGON nodeid/dbc,
 
*** Logon successfully completed.
*** Teradata Database Release is 14.10.00.00
*** Teradata Database Version is 14.10.00.00
*** Transaction Semantics are BTET.
*** Session Character Set Name is 'ASCII'.
 
*** Total elapsed time was 7 seconds.
 
+---------+---------+---------+---------+---------+---------+---------+----
/* Successful SQL statement            */
 
*** Query completed. One row found. One column returned.
*** Total elapsed time was 1 second.
 
    Date
--------
12/03/14
 
+---------+---------+---------+---------+---------+---------+---------+----
/* Check if the SQL statement failed   */
+---------+---------+---------+---------+---------+---------+---------+----
/* Erroneous SQL statement             */
*** Failure 3822 Cannot resolve column 'junk'. Specify table or view.
                Statement# 1, Info =0
*** Total elapsed time was 1 second.
 
 
+---------+---------+---------+---------+---------+---------+---------+----
/* Check if the SQL statement failed   */
+---------+---------+---------+---------+---------+---------+---------+----
+---------+---------+---------+---------+---------+---------+---------+----
.SHOW
 
SELECT junk;
 
+---------+---------+---------+---------+---------+---------+---------+----
.SET ECHOREQ OFF
+---------+---------+---------+---------+---------+---------+---------+----
*** Warning: EOF on INPUT stream.
+---------+---------+---------+---------+---------+---------+---------+----
/* Successful BTEQ command             */
+---------+---------+---------+---------+---------+---------+---------+-
/* Erroneous BTEQ command              */
*** Syntax Error : Width command must be followed by a number.
+---------+---------+---------+---------+---------+---------+---------+-
/* Display last SQL sent to database   */
 
 
+---------+---------+---------+---------+---------+---------+---------+-
/* Exit BTEQ                           */
*** You are now logged off from the DBC.
*** Exiting BTEQ...
*** RC (return code) = 8

Concluding remarks

Hopefully, reading through this article has given you a better understanding of how to control the echoing of requests to the output file, using the ECHOREQ command. There are other BTEQ commands that can also further reduce BTEQ's output size, such as SET QUIET and SET TIMEMSG. For more details, refer to the Basic Teradata Query Reference Manual (Publication B035-2414) from http://www.info.teradata.com.


Hands-on with Teradata Aster Express


The new Teradata Aster Express virtual images bring the powerful analytics of the Aster platform to any PC or workstation. 

With these easy-to-install and freely downloaded images, everyone can now experiment with SQL-MR queries using their own data (or the sample data included with the Teradata Aster tutorials).

In this session, we'll show just how easy it is to install and run these virtual images and also walk through some sample data sets and sample queries to demonstrate the power of Teradata Aster's SQL-MR library of analytic components. 

This hands-on session should give everyone in the audience enough information to get started with their own Teradata Aster cluster and the tools to begin their own "discovery analytics" on their big data.

Note: This was a 2012 Teradata Partners Conference session.

Presenters:
Mike Riordan, Solution Architect – Teradata Corporation
Eric Linden, Manager of Technical Product Marketing - Teradata Aster Corporation

Audience: Data Warehouse Analytical Modeler, Data Warehouse Business User

Training details
Course Number: 50589
Training Format: Recorded webcast
Price: $195
Credit Hours: 1

Teradata Mapping Manager


Teradata Mapping Manager is a Java-based desktop application/tool used by our professional services consultants to aid in the mapping of data and requirements. The mapping metadata is stored in a Teradata database.

It is available for use by Teradata licensees at no charge, although it is not covered by normal Teradata software maintenance and support agreements.

Other documentation:

  1. A list of new features and/or bug fixes in the latest release is available in the "What's new in TMM?" article.
  2. A high-level list of application features is available in the Data Sheet (link above).
  3. Installation and other information is available in the TMM Getting Started guide (link above).
  4. Self-paced, hands-on training is available in the TMM Tutorial (link above).

For community support, please visit the Tools Forum.


Using the Apache Derby ij tool with the Teradata JDBC Driver


Oracle (Sun) JDK 6.0 and JDK 7.0 include the Apache Derby database, which is implemented in 100% Java. 

This article is not about the Apache Derby database; instead, this is about a command-line interactive SQL tool that is included with Apache Derby. 

The Apache Derby "ij" interactive SQL tool is quite useful. It is a command-line SQL tool similar to BTEQ, and it works with any JDBC Driver and any database, not just the Apache Derby database. 

The following instructions show how to use the ij tool with the Teradata JDBC Driver. 

First, you need to have a JDK installed (not just a JRE) that includes the Apache Derby jar files. The Apache Derby jar files are typically located in the db/lib directory under the main JDK install directory. (Note that some builds of JDK 6.0 have problems with installing the Apache Derby files.) 

Second, you need to have the Teradata JDBC Driver jar files terajdbc4.jar and tdgssconfig.jar available. 

On Windows

Assuming that your JDK 6.0 or 7.0 is installed in directory c:\jdk and that your Teradata JDBC Driver jar files are located in c:\terajdbc 

c:\jdk\bin\java -cp "c:/terajdbc/terajdbc4.jar;c:/terajdbc/tdgssconfig.jar;c:/jdk/db/lib/derbytools.jar" org.apache.derby.tools.ij

On UNIX or Linux

Assuming that your JDK 6.0 or 7.0 is installed in directory /usr/jdk and that your Teradata JDBC Driver jar files are located in /usr/terajdbc

/usr/jdk/bin/java -cp "/usr/terajdbc/terajdbc4.jar:/usr/terajdbc/tdgssconfig.jar:/usr/jdk/db/lib/derbytools.jar" org.apache.derby.tools.ij

 

After ij starts, it prints a command prompt "ij>". The interactive commands are the same regardless of which platform you are running on.

Commands in ij must be terminated with a semicolon, just like with BTEQ. You can obtain interactive help with the "help;" command.

ij version 10.2
ij> driver 'com.teradata.jdbc.TeraDriver';
ij> connect 'jdbc:teradata://mydbhost/TMODE=ANSI' user 'joe' password 'please';
ij> select current_timestamp;
Current TimeStamp(6)
--------------------------------
2008-01-16 16:03:46.4

1 row selected
ij> disconnect;
ij> exit;


Statistics Collection in Teradata 14.0


Statistics about tables, columns and indexes are a critical component in producing good query plans.

This session will examine the different options that are available when collecting statistics, with an emphasis on the new USING clauses and other enhancements in Teradata 14.0.  Statistics histograms will be detailed, and the important role that SUMMARY statistics play will be emphasized.   The improved statistics extrapolation process and the ability to combine statistics collections into a single table scan will be covered.  The session will wrap up with some general recommendations for statistics collection in Teradata 14.0.

Key Points:

  • Learn the differences between statistics collection approaches
  • Find out how the new Teradata 14.0 enhancements can benefit you
  • Understand the new statistics collection recommendations for 14.0

Teradata Release information: Teradata 14.0

Presenter: Carrie Ballinger, Software Engineer – Teradata Corporation

Audience: Data Warehouse Administrator, Data Warehouse Application Specialist, Data Warehouse Architect/Designer, Data Warehouse Technical Specialist

Training details
Course Number: 50654
Training Format: Recorded webcast
Price: $195
Credit Hours: 2

Calculation of Table Hash Values to Compare Table Content

Short teaser: To compare table content between different systems, a table hash function is needed.
Attachment: 2289_Arndt.pdf (959.88 KB)

At Partners 2012 I presented the session “How to compare tables between TD systems in a dual active environment” – see the attached slides for details if you are interested.

The main emphasis was, and is, to draw attention to an issue that becomes more important as more and more customers use multiple Teradata instances, either as dual active systems or by using different appliances for different purposes (e.g. an online archive plus an EDW instance). If you expect to have the same data on two or more systems, the question arises of how to prove that the data really is the same – joining across systems is not possible.

Current approaches like calculating column measures have major limitations, especially for big tables – see the presentation for details. Unfortunately, classical hash algorithms like SHA-1 or MD5 – which were designed especially for this purpose – cannot be used within the DB as native functions on table level, as they require the same ordering of data, which cannot be guaranteed on two different systems. An option is to extract the whole data set sorted and run SHA-1 on the output file, but this leads to major costs for the sorts and for network traffic.

A new way to overcome these limitations is a table hash function that drops the ordering requirement of the SHA-1 and MD5 hash functions.
The main idea is based on a two-step approach:
1. Calculate for each row a hash value whose outcome has the classical hash properties.
2. “Aggregate” the row hash values with an aggregate function whose result does not depend on the ordering of the input (i.e. one that is commutative and associative).

The resulting output can be used as a table hash to compare data content between different systems.

The main considerations in choosing the right component functions are:
1. For the row hash: avoid hash collisions, which requires reasonably long hash values. This is an issue, for example, if you want to use Teradata's internal HASHROW function, as hash collisions occur even for small tables.
2. For the aggregation: use a good function that also handles multiset tables (where XOR has an issue, since duplicate rows cancel out).
We implemented a C UDF, compared the results with data-export options, and already see competitive resource consumption figures with this implementation.
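To make the two steps concrete, here is a minimal Python sketch of the idea. The actual implementation described above is a C UDF inside the database; SHA-256 and addition mod 2^256 below merely stand in for the chosen row hash and aggregation functions:

```python
import hashlib

HASH_BITS = 256
MOD = 1 << HASH_BITS  # aggregate row hashes modulo 2^256

def row_hash(row) -> int:
    # Step 1: hash each row; a canonical serialization keeps equal rows equal.
    data = "\x1f".join("" if v is None else str(v) for v in row)
    return int.from_bytes(hashlib.sha256(data.encode()).digest(), "big")

def table_hash(rows) -> int:
    # Step 2: aggregate with modular addition, which is commutative and
    # associative, so the result does not depend on row order. Unlike XOR,
    # duplicate rows in a multiset table do not cancel out.
    total = 0
    for row in rows:
        total = (total + row_hash(row)) % MOD
    return total

t1 = [(1, "a"), (2, "b"), (3, "c")]
t2 = [(3, "c"), (1, "a"), (2, "b")]  # same content, different order
assert table_hash(t1) == table_hash(t2)
assert table_hash(t1) != table_hash(t1 + [(1, "a")])  # extra duplicate row detected
```

Because the aggregation is order independent, each system can compute its table hash over rows in whatever order its AMPs deliver them, and only the two final values need to be compared.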

During the last months we have spent more R&D effort on this and have found a better hash function for the row hash calculation. It reduces CPU consumption to about 60% of that of the SHA-1 hash function we used before. In addition, we found a better aggregation function that overcomes limitations of the ADD MOD 2^(hash length) function.

 

As already discussed in the Partners 2012 presentation, the best performance for this kind of function would be achieved if it were implemented as a core DB function, similar to the HASHROW function. This would also improve usability, as the described data preparation would not be needed. But it is likely that Teradata will only implement this given strong demand from the customer side.

In the meantime – and here starts the sales pitch – if you need to prove that data on two different systems is the same, or you want to discuss any of this in more detail, then contact me.

There are also additional use cases for this function that you should take into consideration, e.g. regression testing of DB versions, systems, and your own software.
So in summary: be aware of the issues and consider the alternatives!


The Revenge of Brick and Mortar


Darryl McDonald, President of Teradata Applications, tweeted this link today:  a practical example of when many of the subjects discussed on this blog come together. 

In a sentence, Wal*Mart (I’m still addicted to the old name) is using in-store real-time geofencing to push offers to customers.  And, as Apple has known for years, use of the Wal*Mart ecosystem alone has the virtuous result of more sales and increased customer stickiness.  "This new breed of mobile-empowered customers is good news for us," Thomas said. "Compared to nonapp users, customers with a Wal-Mart app make two more shopping trips a month to our stores and spend nearly 40 percent more each month."

Pair this with Wal*Mart’s huge store of information about each customer’s shopping habits and desires, coupled with similar customers’ data, and they can anticipate needs that the customer didn’t realize they had.  We’ve come a long way from stacking the beer next to the diapers.


Aster Client Tools

How to use the Teradata JDBC Driver with R


JDBC support has been added in teradataR version 1.0.1, which makes it possible to use the Teradata JDBC Driver through teradataR to connect to the Teradata database.  teradataR allows R users to easily connect to a Teradata database and run statistical functions directly against the Teradata system without having to extract the data into memory.  For more information on teradataR, refer to the teradataR User Guide.

What is R?  R is a language and environment for statistical computing and graphics that runs on UNIX, Windows, and MacOS platforms.

To set up the Teradata JDBC Driver to use with teradataR:

  1. Ensure that you have a Java Runtime Environment (JRE) installed on your client system.  Set the environment variable JAVA_HOME to the directory of the JRE. 
  2. Download and extract the Teradata JDBC Driver to a directory of your choice.  Note the directory of the extracted files as this will be used later to connect to the Teradata database as the ClasspathForTeradataJDBCDriverFiles.
  3. Download and install R[1] from http://www.r-project.org/
  4. Download and install teradataR according to the instructions in the teradataR User Guide

Ensure that the RJDBC, DBI, and rJava packages are installed according to the instructions in the teradataR User Guide in order to use JDBC.

 

Connect to the Teradata database using a teradataR JDBC session:

  1. Open the RGui
  2. Click on Packages-->Load package…
  3. Select “RJDBC” and click OK
  4. Repeat step #2
  5. Select “teradataR” and click OK

teradataR is now ready to use the Teradata JDBC driver to connect to the Teradata database.

 

 

Using the R Console, enter the following steps below to make a Teradata connection:

  1. drv = JDBC("com.teradata.jdbc.TeraDriver","ClasspathForTeradataJDBCDriverFiles")<enter>

         Example: drv = JDBC("com.teradata.jdbc.TeraDriver","c:\\terajdbc\\terajdbc4.jar;c:\\terajdbc\\tdgssconfig.jar")

         NOTE: A path on a UNIX machine would use single forward slashes to separate its components and a colon between files.

 

  2. conn = dbConnect(drv,"jdbc:teradata://DatabaseServerName/ParameterName=Value","User","Password") <enter>

         Example: conn = dbConnect(drv,"jdbc:teradata://jdbc1410ek1.labs.teradata.com/TMODE=ANSI,LOGMECH=LDAP","guestldap","passLDAP01")

         NOTE: Connection parameters are optional. The first ParameterName is separated from the DatabaseServerName by a forward slash character.

 

  3. dbGetQuery(conn,"SQLquery")

         Example: dbGetQuery(conn,"select ldap from dbc.sessioninfov where sessionno=session")


To be able to specify JDBC connection URL parameters, use the JDBC and dbConnect commands as shown above instead of the tdConnect command specified in the teradataR User Guide.

For further examples of how to use teradataR, refer to the teradataR User Guide.


[1] This article is written using R 2.15.3 for Windows.

R 3.0.1 is not compatible with teradataR 1.0.1 and will return an error stating that teradataR was built before R 3.0.0 when trying to install teradataR.



TASM State Changes are Streamlined in Teradata 14.0


A state matrix is a construct that allows you to intersect your business processing windows with the health conditions of your system.  Why should you care about this abstraction?  The state matrix in TASM is the mechanism that supports an automated change in workload management setup as you move through your processing day, or should your system become degraded.  Not only is it automatic, but it's a significantly more efficient way to make a change in setup while work is running.

First, consider the two categories that are intersected in our state matrix:

  1. “Planned environments” represent different times of day when business priorities are expected to be different, such as night and day; weekday and weekend; regular, month-end or year-end. In the graphic below you can see that planned environments are on the horizontal axis of the state matrix.
  2. “Health conditions” represent the robustness of the system. When a node is down you might want throttle limits to be set differently, or priority setup to shift. Health conditions make up the vertical axis of the matrix. There may be two, possibly three health conditions that you want to treat differently in terms of workload management.

The intersection of these two categories, both of which can necessitate setup changes, is referred to as a state. Although a simple state matrix is supplied by default, you will need to define your own specific planned environments and health conditions if you wish to make use of automated changes to workload management.  Once you define a state in Viewpoint Workload Designer, you can associate it with one or several different intersections, as shown in the figure above.
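Conceptually, the matrix is just a lookup from (planned environment, health condition) to a named state. The Python sketch below illustrates this; all the environment, condition, and state names are made up for illustration, since real states are configured in Viewpoint Workload Designer:

```python
# Hypothetical names; real planned environments, health conditions, and
# states are defined by the DBA in Viewpoint Workload Designer.
STATE_MATRIX = {
    ("Weekday",  "Normal"):   "Default",
    ("Weekday",  "Degraded"): "Conserve",
    ("Weekend",  "Normal"):   "BatchHeavy",
    ("Weekend",  "Degraded"): "Conserve",   # several cells can share one state
    ("MonthEnd", "Normal"):   "MonthEndRush",
    ("MonthEnd", "Degraded"): "Conserve",
}

def active_state(planned_environment: str, health_condition: str) -> str:
    """Resolve the current intersection of the matrix to a named state."""
    return STATE_MATRIX[(planned_environment, health_condition)]

print(active_state("Weekend", "Degraded"))  # Conserve
```

The mapping is many-to-one on purpose: here a single "Conserve" state covers every degraded condition, so one set of workload management settings handles all of those intersections.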

 

As mentioned above, a health condition or a planned environment can change as time passes or as the health of the system suddenly degrades. They can also change as a result of a system event that the DBA sets up using TASM.  For example, you can define an event that is triggered when available AWTs reach a specified low level on some number of AMPs.  When you define a system event you give it an “action”, and one action could be to switch to a planned environment that throttles back low-priority work.

The figure below shows the workload attributes that can be modified when a state changes.

 

In the initial releases of TASM, users on busy systems sometimes experienced a delay while waiting for a state change to complete.  When going through a state change in releases prior to Teradata 14.0, a non-trivial amount of internal work had to be performed:  all of the internal TASM tables that define the ruleset had to be re-read, all of the TASM caches had to be rebuilt, the delay queues were completely flushed, and all of the running queries on the system had to be rechecked for adherence to throttles and filters.  Finally, all throttle counters had to be reset.

This overhead has been almost completely eliminated in Teradata 14.0.  With the state change optimization feature, there is minimal impact when doing a state change.  Internal tables do not need to be re-read and the delay queue is left intact.  There is no longer a need to recheck every running query.   A simple update to the existing cache is made to reflect the state information, and the new priority scheduler configuration is downloaded.  State-transition delay queue re-evaluation has been measured to be negligible overhead. So making frequent state changes is easily supportable, should you need to provide for that. 

Even if you are not using the state matrix and are not automating the predictable changes in your processing day, you can still throttle back low priority work on the fly.  However, doing so would require manually enabling a new rule set.  This is not a good idea.  When you change a rule set, interaction with Workload Designer is required to download and activate a new rule set, and far more re-evaluations are required to existing requests, delay queues, Priority Scheduler mappings, etc.

Changing workload management behaviors by enabling an entirely new rule set is not able to take advantage of the state change optimizations in Teradata 14.0. The delay caused by enabling a new rule set on a very busy system has in extreme cases been measured in minutes, versus the negligible overhead of state transitions.  Remember that activating a new ruleset requires reading the ruleset from the TDWM database, activity which may contend with an already stressed system, whereas a state change does not require the TDWM database to be accessed.

Bottom line:  Get comfortable using the state matrix to design and automate your planned and unplanned changes, and enjoy a more efficient transition from setup to setup.  And for those of you already using the state matrix, you will have a smoother experience in the face of change once you are on Teradata 14.0.


Teradata Studio Express


Teradata Studio Express is an information discovery tool that retrieves data from Teradata Database systems and allows the data to be manipulated and stored on the desktop.

It is built on top of the Eclipse Rich Client Platform (RCP). This allows the product to take advantage of the RCP framework for building and deploying native GUI applications to a variety of desktop operating systems.

Presenters: Francine Grimmer, Software Engineer - Teradata Corporation
Darrick Sogabe, Product Manager - Teradata Corporation

Audience: 
Data Warehouse Administrator, Data Warehouse Application Specialist, Data Warehouse Architect/Designer, Data Warehouse Technical Specialist
Training details
Course Number: 
50591
Training Format: 
Recorded webcast
Price: 
$195
Credit Hours: 
1

Recommended Dictionary Statistics to Collect in Teradata 14.0


Collecting statistics on data dictionary tables is an excellent way to tune long-running queries that access multi-table dictionary views.  Third party tools often access the data dictionary several times, primarily using the X views.  SAS, for example, accesses DBC views including IndicesX and TablesX for metadata discovery.  Without statistics, the optimizer may do a poor job in building plans for these complex views, some of which are composed of over 200 lines of code.

In an earlier blog posting I discussed the value of collecting statistics against data dictionary tables, and provided some suggestions about how you can use DBQL to determine which tables and which columns to include.  Go back and review that posting.  This posting is a more comprehensive list of DBC statistics that is updated to include those recommended with JDBC.
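As a starting point for that DBQL analysis, a query along the following lines can surface the dictionary columns your workload touches most often. This is a hedged sketch, not an official recipe: it assumes DBQL object logging (BEGIN QUERY LOGGING ... WITH OBJECTS) is enabled and that DBC.DBQLObjTbl carries the object rows for your logged queries.

```sql
-- Sketch only: assumes DBQL object logging is enabled, so that
-- DBC.DBQLObjTbl holds one row per object referenced by a logged query.
-- Dictionary columns with high access counts are candidates for statistics.
SELECT ObjectTableName
     , ObjectColumnName
     , COUNT(*) AS AccessCount
FROM   DBC.DBQLObjTbl
WHERE  ObjectDatabaseName = 'DBC'
AND    ObjectType = 'Col'
GROUP  BY 1, 2
ORDER  BY 3 DESC;
```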

Note that the syntax I am using is the new create-index-like syntax available in Teradata 14.0.  If you are on a release prior to 14.0, you will need to rewrite the following statements in the traditional COLLECT STATISTICS syntax.
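To illustrate the rewrite, here is roughly how the first few AccessRights recommendations look in the traditional pre-14.0 form, where each column or column group needs its own statement:

```sql
-- Traditional (pre-14.0) form: one COLLECT STATISTICS statement
-- per column or column group, instead of one combined statement.
COLLECT STATISTICS ON DBC.AccessRights COLUMN (TvmId);
COLLECT STATISTICS ON DBC.AccessRights COLUMN (UserId, DatabaseId);
COLLECT STATISTICS ON DBC.AccessRights COLUMN (TVMId, DatabaseId, UserId);
```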

Here are the recommendations for DBC statistics collection.  Please add a comment if I have overlooked any other useful ones.

COLLECT STATISTICS
 COLUMN TvmId
 , COLUMN UserId
 , COLUMN DatabaseId
 , COLUMN FieldId
 , COLUMN AccessRight
 , COLUMN GrantorID
 , COLUMN CreateUID
 , COLUMN (UserId ,DatabaseId)
 , COLUMN (TVMId ,DatabaseId)
 , COLUMN (TVMId ,UserId)
 , COLUMN (DatabaseId,AccessRight)
 , COLUMN (TVMId,AccessRight)
 , COLUMN (FieldId,AccessRight)
 , COLUMN (AccessRight,CreateUID)
 , COLUMN (AccessRight,GrantorID)
 , COLUMN (TVMId ,DatabaseId,UserId)
ON DBC.AccessRights;


COLLECT STATISTICS
 COLUMN DatabaseId
 , COLUMN DatabaseName
 , COLUMN DatabaseNameI
 , COLUMN OwnerName
 ,  COLUMN LastAlterUID
 , COLUMN JournalId
 , COLUMN (DatabaseName,LastAlterUID)
ON DBC.Dbase;


COLLECT STATISTICS
 COLUMN LogicalHostId
 , INDEX ( HostName )
ON DBC.Hosts;

COLLECT STATISTICS
 COLUMN OWNERID
 , COLUMN OWNEEID
 , COLUMN (OWNEEID ,OWNERID)
ON DBC.Owners;

COLLECT STATISTICS
 COLUMN ROLEID
 , COLUMN ROLENAMEI
ON DBC.Roles;


COLLECT STATISTICS
INDEX (GranteeId)
ON DBC.RoleGrants;


COLLECT STATISTICS 
COLUMN (TableId)
, COLUMN (FieldId)
, COLUMN (FieldName)
, COLUMN (FieldType)
, COLUMN (DatabaseId)
, COLUMN (CreateUID)
, COLUMN (LastAlterUID)
, COLUMN (UDTName)
, COLUMN (TableId, FieldName)
ON DBC.TVFields;


COLLECT STATISTICS
 COLUMN TVMID
 , COLUMN TVMNAME
 , COLUMN TVMNameI
 , COLUMN DATABASEID
 , COLUMN TABLEKIND
 , COLUMN CREATEUID
 , COLUMN CreatorName
 , COLUMN LASTALTERUID
 , COLUMN CommitOpt
 , COLUMN (DatabaseId, TVMName)
 , COLUMN (DATABASEID ,TVMNAMEI)
ON DBC.TVM;

 
COLLECT STATISTICS
 INDEX (TableId) 
 , COLUMN (FieldId)
 , COLUMN (IndexNumber)
 , COLUMN (IndexType)
 , COLUMN (UniqueFlag)
 , COLUMN (CreateUID)
 , COLUMN (LastAlterUID)
 , COLUMN (TableId, DatabaseId)
 , COLUMN (TableId, FieldId)
 , COLUMN (UniqueFlag, FieldId)
 , COLUMN (UniqueFlag, CreateUID)
 , COLUMN (UniqueFlag, LastAlterUID)
 , COLUMN (TableId, IndexNumber, DatabaseId)
ON DBC.Indexes;


COLLECT STATISTICS
 COLUMN (IndexNumber)
 , COLUMN (StatsType)
ON DBC.StatsTbl;


COLLECT STATISTICS
 COLUMN (ObjectId)    
 , COLUMN (FieldId)
 , COLUMN (IndexNumber)
 , COLUMN (DatabaseId, ObjectId, IndexNumber)
ON DBC.ObjectUsage;


COLLECT STATISTICS
 INDEX (FunctionID )
 , COLUMN DatabaseId
 , COLUMN ( DatabaseId ,FunctionName )
ON DBC.UDFInfo;


COLLECT STATISTICS
 COLUMN (TypeName)
 , COLUMN (TypeKind)
ON DBC.UDTInfo;

 


Unity Director 14.00 - Routing Rules


Unity Director 14.00 provides user and query routing between multiple Teradata systems. This satisfies two requirements – routing users to a system that has data to satisfy their queries and re-routing users during planned or unplanned system outages. Hence, end users receive the ultimate benefit of continuous, transparent access to data. In addition, Unity Director provides high availability, disaster recovery, and data synchronization for multiple Teradata systems. For more information on Unity Director 14.00 please follow the links provided at the end of this article.

In this article, I will provide details of the various routing rules that can be created using Unity Director 14.00. One can choose from several routing rule modes: Auto, Preferred, and Balanced. I will explain how reads and writes can be configured in these modes.

Given below is a short description and a detailed video demonstration of each routing rule, with examples for a Unity Director configuration managing two backend Teradata systems named ASTRO and CHILI.

Create Balanced Routing Rule:

If the request is a CREATE statement, it is submitted only to the designated system for the session. If the request is successful, Unity Director's Data Dictionary is updated to reflect the existence of the new object on the system that the request was submitted to. If the request fails with a Resubmit error code, the request is submitted to the next system on the WRITE list that is in a valid state. If the request fails with an Exit error code, the client session is disconnected. The sessions to the Teradata systems are closed once all of the previous pending requests are processed by the system.

Create Balanced using Unity Director's Admin tool

 

 

 

 

Create Balanced using the Unity Director Portlet

 

 

Read Preferred Routing Rule:

If the request is a read, it is sent to the system listed first in the list of preferred Teradata systems. The tables in the read request should be in a valid state on the Teradata systems. If the preferred Teradata system is unavailable, the next system is chosen to satisfy the request, provided the tables in the read request are in a valid state.

Read Preferred using Unity Director's Admin tool

 

 

Read Preferred using  the Unity Director Portlet

 

 

Read only Routing Rule:

Only read requests are allowed in a session using the read-only routing rule; write requests are denied. This routing rule can be defined with or without the preferred option. If the preferred option is specified, the rule behaves as a Read Preferred rule. Without the preferred option, reads are sent to a system chosen by Unity Director.

Read Only using Unity Director's Admin tool

 

 

Read Only using the Unity Director Portlet

 

 

Read Auto Write Auto Routing rule:

When a user with this routing rule logs on, all systems are used for both read and write. Sessions are created on all systems. Read requests are automatically routed using the shortest queue algorithm to load balance amongst all systems. Write requests are sent to all systems.

Read Auto Write Auto using Unity Director's Admin tool

 

 

Read Auto Write Auto using the Unity Director Portlet

 

 

Create Preferred Routing Rule

If the request is a CREATE statement, it is submitted only to the first system on the WRITE list that is in a valid state. If successful, Unity Director's Data Dictionary is updated to reflect the existence of the new object on the system that the request was submitted to. If the request fails with a Resubmit error code, the request is submitted to the next system on the WRITE list that is in a valid state. If the request fails with an Exit error code, the client session is disconnected. The sessions to the Teradata systems are closed once all of the previous pending requests have been processed by the system. You can find a detailed example of using this routing rule with MicroStrategy here: MicroStrategy reporting using Teradata Unity Director 14.00.

Create Preferred using the Unity Director Portlet

 

 

Create Preferred using Unity Director's Admin tool

 

 

Additional Resources


Teradata Express for VMware Player


Download Teradata Express for VMware, a free, fully-functional Teradata database that can be up and running on your system in minutes. For TDE 14.0, please read the launch announcement and the user guide. For previous versions, read the general introduction to the new Teradata Express family, or learn how to install and configure Teradata Express for VMware.

There are multiple versions of Teradata Express for VMware available: for Teradata 13.0, 13.10, and 14.0. More information on each package is available on our main Teradata Express page.

Note that in order to run this VM, you'll need to install VMware Player or VMware Server on your system. Also, please note that your system must have 64-bit support. For more details, see how to install and configure Teradata Express for VMware.

For feedback, discussion, and community support, please visit the Cloud Computing forum.

Download sizes, space requirements and MD5 checksums

Package | Version | Initial Disk Space | Download size (bytes) | MD5 checksum
Teradata Express 14.0 for VMware (4 GB) | 14.00.00.01 | - | 3,105,165,305 | F8EFE3BBE29F3A3504B19709F791E17A
Teradata Express 14.0 for VMware (40 GB) | 14.00.00.01 | - | 3,236,758,640 | B6C81AA693F8C3FB85CC6781A7487731
Teradata Express 14.0 for VMware (1 TB) | 14.00.00.01 | - | 3,484,921,082 | 2D335814C61457E0A27763F187842612
Teradata Express 13.10 for VMware (1 TB) | 13.10.00.10 | 15 GB | 3,002,848,127 | 04e6cb9742f00fe8df34b56733ade533
Teradata Express 13.10 for VMware (40 GB) | 13.10.00.10 | 10 GB | 2,943,708,647 | ab1409d8511b55448af4271271cc9c46
Teradata Express 13.0 for VMware (1 TB) | 13.00.00.19 | 64 GB | 3,072,446,375 | 91665dd69f43cf479558a606accbc4fb
Teradata Express 13.0 for VMware (40 GB) | 13.00.00.19 | 10 GB | 2,018,812,070 | 5cee084224343a01de4ae3879ada9237
Teradata Express 13.0 for VMware (40 GB, Japanese) | 13.00.00.19 | 10 GB | 2,051,920,372 | 8e024743aeed8e5de2ade0c0cd16fda9
Teradata Express 13.0 for VMware (4 GB) | 13.00.00.19 | 10 GB | 2,002,207,401 | 5a4d8754685e80738e90730a0134db9c
Teradata Tools and Utilities 13.10 Windows Client Install Package | 13.10.00.00 | - | 409,823,322 | 8e2d5b7aaf5ecc43275e9679ad9598b1

 


New Teradata Express 14.0 Versions Available

Short teaser: 
Teradata launches updated versions of Teradata Express 14.0 to the latest patch level.

Teradata is announcing new Teradata Express 14.0 images that update those announced last year. These new images are at patch level 14.00.03.02 which represents the current shipping version of Teradata 14.0. This gives you access to the latest patch updates for Teradata 14.0 for your test and development needs.

Depending upon your needs and the resources available on your PC, three versions of Teradata Express 14.0 are available. Please note that the resources needed for Teradata Express are in addition to those needed by the operating system on your PC:

  • TD Express 14.0 with 4GB of storage. Requires 13 GB of disk space and 2.0 GB of RAM for the Virtual Machine.
  • TD Express 14.0 with 40GB of storage. Requires 18 GB of disk space and 2.5 GB of RAM for the Virtual Machine.
  • TD Express 14.0 with 1TB of storage. Requires 35 GB of disk space and 4.0 GB of RAM for the Virtual Machine.

A 64-bit virtualization-capable PC is required.  VMware provides a utility to check your system for 64-bit support at this link.

Details on the images remain the same as the original TD 14.0 and can be found here:

http://developer.teradata.com/database/articles/teradata-express-14-0-for-vmware-now-available

Instructions for running the images remain the same and can be found here:

http://developer.teradata.com/database/articles/teradata-express-14-0-for-vmware-user-guide

The new images are available on the download page.

Please note that while the Teradata Express family of products is not officially supported, you can talk to other users and get help in the Cloud Computing forum. Note also that Japanese-language instructions for configuring TDE-V are available for download in PDF format.


Accessing Historical and Current Data with Unity Director

Short teaser: 
Using Unity Director to retain historical data on specific systems.

Unity Director is an extremely capable product that offers a wide variety of benefits. One of its unique benefits is the ability to easily route users and requests to specific Teradata systems. There are many potential uses of this ability, but one use in particular is to selectively direct users that want access to historical data to the specific systems where the data resides.

 
Unity Director 14.10 (target release Q1 2014) will introduce full functionality for different reference depths on tables (or what is called "Table-depth"), allowing users to define views that access different ranges of data (i.e. historical data) on particular systems.  Here's an example to put this into context: customers will be able to have three years of history in the Sales table on System "A", and three months of history in the same Sales table on System "B".  Retaining this data within the same table can simplify ETL and reporting processes. As such, Unity Director will provide an easy way to retain historical data on a single (typically larger) Teradata system, while allowing non-historical or more recent data to remain on a different, potentially smaller system.
 

However, you don't need to wait until Unity Director 14.10 to start using this functionality. There is a way to leverage different table-depths in the current release of the product.  Here’s an example of how you can start creating system specific views today with Unity Director 14.00.

Creating the history and current data views

The following example illustrates how to create a set of views: a currentYearView to display data from the last year only, and a historyView to display data from all the previous years. While the data supporting the currentYearView is kept on both systems (system 1 and system 2), the data for the historyView is retained only on system 2.

 

1.       The base data table, which will contain the current and historical records, is created on both systems:

create table basedata (id integer, ts timestamp);

2.       On both systems, create a current view (currentYearView) that will return only current data (in this case the rows from the last year).

create view currentYearView as
 locking basedata for access
 select * from basedata where ts between '2013-01-01 00:00:00' and '2013-12-31 23:59:59';

3.       An additional table, called system2Only, is also created only on system 2. This will help direct queries to the historical data on system 2:

create table system2only (id integer); -- An empty table, with arbitrary columns

4.       Also on system 2, a history view (historyView) is created that joins to the system2Only table:

create view historyView as
 locking basedata for access
 select * from basedata
 full outer join system2only on 1=0;

5.       Using the Unity Director Dictionary Scanner, create a dictionary for the database. Be sure to include the system2Only table, and historyView on system 2. Since the historyView depends on the system2Only table, it will automatically be available only on system 2. This is important, because Unity Director 14.00 will not otherwise allow the view to be selectively managed on a specific system.

 

 

 

6.       Historical data (records for 10 years in this case) should be populated directly on system 2:

insert into basedata values (1, CAST('20030503111111' AS TIMESTAMP(6) FORMAT 'YYYYMMDDhhmiss'));
insert into basedata values (2, CAST('20040503111111' AS TIMESTAMP(6) FORMAT 'YYYYMMDDhhmiss'));
insert into basedata values (3, CAST('20050503111111' AS TIMESTAMP(6) FORMAT 'YYYYMMDDhhmiss'));
insert into basedata values (4, CAST('20060503111111' AS TIMESTAMP(6) FORMAT 'YYYYMMDDhhmiss'));
insert into basedata values (5, CAST('20070503111111' AS TIMESTAMP(6) FORMAT 'YYYYMMDDhhmiss'));
insert into basedata values (6, CAST('20080503111111' AS TIMESTAMP(6) FORMAT 'YYYYMMDDhhmiss'));
insert into basedata values (7, CAST('20090503111111' AS TIMESTAMP(6) FORMAT 'YYYYMMDDhhmiss'));
insert into basedata values (8, CAST('20100503111111' AS TIMESTAMP(6) FORMAT 'YYYYMMDDhhmiss'));
insert into basedata values (9, CAST('20110503111111' AS TIMESTAMP(6) FORMAT 'YYYYMMDDhhmiss'));
insert into basedata values (10, CAST('20120503111111' AS TIMESTAMP(6) FORMAT 'YYYYMMDDhhmiss'));


select count(*) from basedata;

*** Query completed. One row found. One column returned.
*** Total elapsed time was 1 second.

   Count(*)
-----------
         10

 

7.       The data for the current year can now be populated via Unity Director, using the currentYearView:

insert into currentYearView values (11, CAST('20130503111111' AS TIMESTAMP(6) FORMAT 'YYYYMMDDhhmiss'));
 *** Insert completed. One row added.
 *** Total elapsed time was 1 second.


select count(*) from currentYearView;
 *** Query completed. One row found. One column returned.
 *** Total elapsed time was 1 second.


   Count(*)
-----------
          1

 

8.       Selects to the history view will automatically route to System 2, and see the full data, because the system2only table is present only on system 2:

select * from historyView order by ts;

*** Query completed. 10 rows found. 2 columns returned.
*** Total elapsed time was 1 second.

         id                          ts
-----------  --------------------------
          1  2003-05-03 11:11:11.000000
          2  2004-05-03 11:11:11.000000
          3  2005-05-03 11:11:11.000000
          4  2006-05-03 11:11:11.000000
          5  2007-05-03 11:11:11.000000
          6  2008-05-03 11:11:11.000000
          7  2009-05-03 11:11:11.000000
          9  2011-05-03 11:11:11.000000
         10  2012-05-03 11:11:11.000000
         11  2013-05-03 11:11:11.000000

Daily Modifying and Loading/Unloading of Current Data

Inserts of new current data should be done through Unity Director using the current data view (currentYearView), as shown in step 7 of the previous example. Unity Director will multicast these inserts to both Teradata systems.

 

Using the current data view reduces the risk that a user will unintentionally modify historical data that exists only on the second system, since the historical data is not available via this view:

delete currentYearView where id=2;

 *** Delete completed. No rows removed.
 *** Total elapsed time was 1 second.

Updates or deletes of current data can also be executed via Unity Director, similarly using the current data view to prevent touching historical data.

delete currentYearView where id=11;

 *** Delete completed. One row removed.
 *** Total elapsed time was 1 second.

In the event a user attempts to modify, through Unity Director, historical records that exist only on the second system, Unity Director will detect this as a data inconsistency and take one of the copies of the table offline.

Consider this ill-advised update to the base data table from a rogue user, accessing the basedata table directly:

update basedata set id=id;

 *** Update completed. One row changed.
 *** Total elapsed time was 1 second.

Since this update to the base data table affected 1 row on system 1 and 10 rows on system 2, Unity Director detects the data inconsistency and takes one copy of the table offline, generating the following alert (some text removed for brevity - note the differing row counts):

Alert No.          : 99
Alert Code         : 40054
Alert Description: An inconsistent response was detected from a Teradata system.  As a result the object has been placed in the Unrecoverable state.
Alert Details      : System 2 (db2) sent an inconsistent response, table dbtest.basedata is unrecoverable.

Response mismatch: session 1000, txn 376, request 13, log 11421536
…
        SQL: 'update basedata set id=id;'…
Response 1 from system 1 (db1): row count 1, hash 3010, mesh status 0, DRCP code 0 [sent to client]
…
Response 2 from system 2 (db2): row count 10, hash 30a0, mesh status 0, DRCP code 0
…
System 2 (db2) sent an inconsistent response, table dbtest.basedata is unrecoverable.

Which copy of the table (system 1 with only current data, or system 2 with the historical data) is taken offline is non-deterministic. Since Unity Director is designed to protect the user against these data inconsistencies, the rogue user sees only 1 row affected, since system 1 was the first to respond in this case.

Unloading, Archiving or Modifying Historical Data

Since the historical data cannot be modified via Unity Director or Loader, ETL processes that modify that data must access the data directly on the second system. As with any process directly accessing a Unity Director managed table on a single Teradata system, these direct ETL processes must be managed appropriately so they do not interfere with any workload accessing the base table via Unity Director or Unity Loader. The base table should be halted in Unity Director before attempting to load or unload any historical data, and then recovered to return it to active service when the ETL process is complete.

unityadmin> object halt dbtest.basedata on db2;

The request is currently processing as operation number 25.
You may check its status using the command 'operation check 25'.

unityadmin> operation check 25;

Operation Number : 25
Operation Name   : Halting Table
User             : admin
User Name        : Main Administration User
Progress (%)     : 100
Status           : Finished (1)
Start Time       : 05/14 07:13:17
Finish Time      : 05/14 07:13:18
Systems:

    [2] db2 - Finished (1)

Updates:

    05/14 07:13:18 [-] Info: Halting table dbtest.basedata
    05/14 07:13:18 [-] Info: Requesting mgmt X lock on 'dbtest.basedata'
    05/14 07:13:18 [-] Info: Mgmt X lock on 'dbtest.basedata' granted
    05/14 07:13:18 [-] Info: Releasing mgmt X lock on 'dbtest.basedata'
    05/14 07:13:18 [-] Info: Successfully halted table on 1 systems
    05/14 07:13:18 [-] Info: Operation finished

unityadmin>

It's important to note that during this operation, the historical data will be unavailable to client applications via Unity Director. Clients attempting to read from the historical view will automatically hold until the table is returned to service.

 

BTEQ -- Enter your SQL request or BTEQ command:

select * from historyView order by ts;

…client application waits until the base table is returned to service, since it requires both the system2Only table, and the basedata table.

Once the base table is recovered:

unityadmin> object recover  dbtest.basedata on db2;

The request is currently processing as operation number 26.

…then the client request completes:

select * from historyView order by ts;

 *** Query completed. 10 rows found. 2 columns returned.
 *** Total elapsed time was 3 minutes and 49 seconds.

         id                          ts
-----------  --------------------------
          1  2003-05-03 14:11:11.000000
          2  2004-05-03 14:11:11.000000
          3  2005-05-03 14:11:11.000000
          4  2006-05-03 14:11:11.000000
          5  2007-05-03 14:11:11.000000
          6  2008-05-03 14:11:11.000000
          7  2009-05-03 14:11:11.000000
          9  2011-05-03 14:11:11.000000
         10  2012-05-03 14:11:11.000000
         11  2013-05-03 14:11:11.000000

 BTEQ -- Enter your SQL request or BTEQ command:

 

Availability Considerations

As shown in the previous example, if the base data table is unavailable for any reason on the second system, the historical data will be unavailable. If the table is out of service or standby, any reads against the historyView will hold until it is returned to service.

 

Should the base data table become unrecoverable on the second system, then any queries against the historyView will fail with the following error:

select * from historyView order by ts;

 *** Failure 4510 No systems available
 *** Total elapsed time was 1 second.

Unity Director will also generate an alert when this occurs:

----------------------------------------------
Alert No.          : 98
Alert Code         : 40044
Alert Description  : There were no systems available to complete the request.
Alert Details      : No target found for session 1000
Alert Category     : Database Operations
Resource Type      : System
Resource ID        : u14s2
Alert Severity     : Critical
Alert State        : Opened
Repeated           : 0
Raised Time        : 05/14 07:30:29

Future Changes in Unity Director 14.10

The use of this system2Only table is only a temporary workaround, necessary for the 14.00 version of Unity Director. Unity Director 14.10 will introduce the ability to select which systems views are managed on, eliminating the need for this table and for the extra join in the historyView definition:

create view historyView as
 locking basedata for access
 select * from basedata;

 

Instead, the Unity Director's Data Dictionary will explicitly allow the view to be managed on specific systems, without the dependency on the system2Only table:

 

 

Additional Resources


Teradata OLE DB Provider


The Teradata OLE DB Provider allows you to connect to the Teradata database using a Microsoft OLE DB interface.

To begin, see the README files.


Aster Analytics Beta


This package contains the Aster Analytics Beta Functions


Big Data - Big Changes


Behind the Big Data hype and buzzword storm are some fundamental additions to the analytic landscape.

What new tools do you need to deal with the new storm of data ripping up your IT infrastructure? What do these new tools do and what are they not good at? How do you choose among these tools to solve the business challenges your company is facing? And how do you tie all the tools together to make them work as an overall analytic ecosystem?

Presenter: Todd Walter, Chief Technologist – Teradata Corporation

Audience: 
Database Administrator, Designer/Architect
Training details
Course Number: 
50782
Training Format: 
Recorded webcast
Price: 
$195
Credit Hours: 
2