
Unity Director 14.00 & Unity Loader 14.00 now available


Unity Director 14.00 and Unity Loader 14.00 are now available!  Last year Teradata announced a new query management and data synchronization product to the marketplace.  Now we've taken it to the next level.  Unity Director 14.00 provides a new user interface, new and improved routing rules and scalable configuration options.  Unity Loader 14.00 is a new offering extending beyond the basic synchronization of SQL updates and now enabling the intelligent synchronization of high volume/bulk loads. 

Leveraging Teradata's patented SQL Multicast technology, Unity Director and Unity Loader continue to change the paradigm of query management and data synchronization across multiple Teradata systems. Here are a few highlights of the 14.00 releases.  

Unity Director 14.00

Unity Director is focused on query management and SQL-based data synchronization across multiple Teradata systems.  For query management, Unity Director enables not just query failover but also intelligent query routing based on the SQL object being accessed.  Users can choose from several routing modes: Named, Preferred, or Auto Routing.  Director 14.00 now provides the ability to designate a given system for specific CREATE operations, allowing for greater control of certain workloads - great for in-database transformations or BI tools that leverage temporary tables.  Unity Director 14.00 also made some significant improvements with respect to authentication, now supporting LDAP, Kerberos and SPNEGO authentication protocols.
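
As a rough conceptual sketch (this is not Unity Director's actual implementation, API, or rule syntax, and the user name and SQL below are hypothetical), a create preferred rule can be thought of as a routing function that pins CREATE operations from mapped users to one designated system, while other requests remain eligible for all managed systems:

#include <stdio.h>
#include <string.h>

/* Conceptual sketch only -- not Unity Director's real API or rule syntax.
   Illustrates the idea of a "create preferred" routing rule: CREATE
   statements from a mapped user go to one designated system, while other
   requests remain eligible for routing to all managed systems. */

typedef enum { ROUTE_ALL_SYSTEMS, ROUTE_PREFERRED_ONLY } RouteTarget;

static RouteTarget route_request(const char *user, const char *sql)
{
    /* Hypothetical user mapping: BI users that create temp tables. */
    int mapped_user = (strcmp(user, "bi_report_user") == 0);

    /* Pin CREATE operations (e.g., temporary tables) to the preferred
       system so only one system spends resources on them. */
    if (mapped_user && strncmp(sql, "CREATE", 6) == 0)
        return ROUTE_PREFERRED_ONLY;

    return ROUTE_ALL_SYSTEMS;  /* writes multicast, reads balanced */
}

int main(void)
{
    const char *sql = "CREATE VOLATILE TABLE tmp_sales AS (...) WITH DATA";
    printf("%s\n", route_request("bi_report_user", sql) == ROUTE_PREFERRED_ONLY
                       ? "routed to preferred system only"
                       : "routed to all managed systems");
    return 0;
}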

Unity Director 14.00 provides greater scalability as well.  Not only does it support a greater number of database objects and concurrent sessions, but it also introduces a new Teradata Managed Server for increased recovery log space for both SQL-based and Bulk Load (Unity Loader) operations.  The new Teradata Managed Server itself provides new levels of scalability in that customers can now configure multiple servers in a single data center based on their data synchronization and load volume requirements.

Finally, Unity Director 14.00 introduces a brand new user interface.  To be more consistent with other applications in the Teradata Analytical Ecosystem, Unity Director has replaced its original UI with a Viewpoint portlet.  The new user interface allows for greater flexibility when configuring systems under Director control and defining specific routing rules.

Unity Loader 14.00

In the past, synchronizing large volumes of bulk data to two or more Teradata systems required a custom-built dual-load engagement.  With the introduction of Unity Loader 14.00, custom dual-load solutions are no longer needed.  Unity Loader is a powerful offering providing intelligent and selective routing of bulk loads to the appropriate Teradata system.

Like Director, Loader applies Teradata’s patented SQL Multicast technology - yet Unity Loader goes beyond SQL-based operations and enables the Teradata Parallel Transporter bulk load utilities.  This allows customers to direct their Teradata Parallel Transporter load jobs at Unity Loader, and Loader will intelligently deliver the updates to one or more systems based on where the data objects exist.  Specifically, Unity Loader 14.00 supports the bulk load utilities of TPT Load and JDBC FastLoad.  Future releases will expand support for additional bulk load utilities.
 
Unity Loader is tightly integrated with Unity Director to help coordinate writes and ensure appropriate sequencing of bulk load operations.  Unity Loader also shares the same Teradata Managed Server with Unity Director and does not require any additional data center footprint.
 
The combination of Unity Director and Unity Loader allows customers to seamlessly and intelligently manage queries, synchronize SQL-based and bulk-load updates, and maintain transparency in the event of system outages.  All of this is done through a common, easy-to-use interface that lets customers add additional Teradata systems to the Analytical Ecosystem with minimal effort.

Additional Resources

For more information on Unity Director 14.00 and Unity Loader 14.00, follow the links below. 


Teradata CLIv2 for Linux


Teradata Call-Level Interface Version 2 is a collection of callable service routines that provide the interface between applications and the Teradata Gateway. Gateway is the interface between CLI and the Teradata Database. This download package is for the Linux platform.
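
For orientation, here is a minimal connect/disconnect sketch modeled on the CLIv2 sample programs shipped with the package. Header names and DBCAREA field usage vary by release, and the logon string is a placeholder, so treat this as an outline under those assumptions rather than a drop-in program; consult the Teradata Call-Level Interface Version 2 Reference for the authoritative call sequence.

#include <string.h>
#include <coptypes.h>   /* CLIv2 type definitions      */
#include <coperr.h>     /* CLIv2 return codes (EM_OK)  */
#include <dbcarea.h>    /* the DBCAREA structure       */
#include <parcel.h>     /* parcel flavors              */

int main(void)
{
    struct DBCAREA dbc;
    Int32  result;
    char   cnta[4];
    char   logon[] = "dbc/user,password";   /* placeholder tdpid/user,pwd */

    dbc.total_len = sizeof(struct DBCAREA);
    DBCHINI(&result, cnta, &dbc);           /* initialize the DBCAREA   */
    if (result != EM_OK) return 1;

    dbc.change_opts = 'Y';
    dbc.logon_ptr   = logon;
    dbc.logon_len   = (UInt32) strlen(logon);
    dbc.func        = DBFCON;               /* connect (logon) request  */
    DBCHCL(&result, cnta, &dbc);
    if (result != EM_OK) return 1;

    /* ... submit requests with DBFIRQ and fetch parcels with DBFFET ... */

    dbc.func = DBFDSC;                      /* disconnect (logoff)      */
    DBCHCL(&result, cnta, &dbc);
    DBCHCLN(&result, cnta);                 /* clean up CLI resources   */
    return 0;
}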

Download packages

Teradata CLIv2 for Windows


Teradata Call-Level Interface Version 2 is a collection of callable service routines that provide the interface between applications and the Teradata Gateway. Gateway is the interface between CLI and the Teradata Database. This download package is for the Windows platform.

Download packages

MicroStrategy reporting using Teradata Unity Director 14.00


Unity Director handles data for mission critical applications that demand continuous high availability (HA), data location identification, session management, and data synchronization. It simplifies multiple Teradata Database system management tasks including routing users to the data location, rerouting users during an outage, and synchronizing database changes across Teradata Database systems. Unity Director integrates with Unity Ecosystem Manager to enable administrators to view system and table health, receive warning and critical alerts, and view analytics and statistics about data operations. As a bulk load option, Unity Director can utilize Unity Loader to selectively route large data loads from a client to Teradata systems without additional administrator operations.

In this article, I will show you how to establish Unity Director 14.00 routing rules that enable you to efficiently load-balance reports across multiple managed Teradata systems.

Routing Rules and User Mappings

A routing rule defined in Unity Director is used along with user mappings to enable administrators to efficiently manage client connections and route them to the desired managed Teradata system(s).  You can define read-only or read/write routing rules.  I will demonstrate how to build create preferred routing rules via Unity Director’s graphical user interface and command line utility.

Creation of routing rules and user mappings via Unity Director portlets

Creation of routing rules via the Unity Director command line

MicroStrategy’s use of the create preferred routing rules

When MicroStrategy reporting utilizes temp table creation capabilities, Unity 13.10 was required to send DDL and write requests to all managed Teradata systems.  This resulted in resources being consumed on both systems.  In Unity Director 14.00, create preferred routing rules were added specifically to reduce this resource consumption.  Now, when temporary tables are needed, the Unity administrator can establish create preferred routing rules and user mappings that direct the DDL and read/writes to a single system.  The video below demonstrates this capability.

MicroStrategy with a create preferred routing rule

Configuring MicroStrategy to use Unity Director 14.00

Additional Resources


Unity Director Initial Configuration and Setup


This study will demonstrate how the Unity Director 14.00 user interface handles the initial configuration and setup activities users will face when starting to work with this product.

Terms

  • Unity Director - Viewpoint portlet, available via Unity menu, used for Unity Director monitoring and management, consists of 5 views represented as tabs: Dashboard, Table Health, Sessions, Alerts, and Operations.
  • Director Dashboard tab consists of 4 views representing high-level user objectives when monitoring Unity: the Data Synchronization tab shows system states and load progress; the Components tab shows physical architecture; the Performance tab shows key Unity performance metrics; and the Blocked Sessions tab shows a blocked user sessions chart
  • Director Configuration - Viewpoint portlet used for configuration, consists of 4 views: Data Dictionary, Session Routing, Thresholds and Global Settings
  • Data Dictionary (DD) - a set of database objects managed by Unity Director
  • Managed systems - Teradata Database systems managed by Unity Director

Environment

Within an EDW, a user has created a data loading application that needs to be made highly available via Unity Director. There are two sites (local and remote) hosting servers connected as an HA pair: two Unity Director servers and two managed database systems, monitored by two clustered Viewpoint systems.

Configuration and initial steps use case

Purpose:  User wants to review initial configuration and start using Unity Director software to manage the data loading application.

Assumptions:  Unity Director is already installed in the user environment. Unity Director is sold as staged software installed on a Teradata Managed Server.  The Teradata systems managed by Director existed prior to installing Unity Director, and their network configuration was added to the Unity Director servers' hosts files.

Step 1:

Review the default Unity Director configuration – look at the initial installation architecture and make sure that everything is configured correctly and the Unity Director system is ready for use.

Start by examining the Unity Director components architecture on the Dashboard Components tab – this view shows that users connect to the Unity Director TDPID and that requests are routed to the database systems. It also presents state and detail popup windows for all Unity Director components (sequencer, dispatcher, watchdog, endpoints and repository) at each site.

View default global settings in Configuration Portlet

  1. Alerts configuration – shows which alerts are enabled
  2. Purging strategy – shows how often historical data is deleted
  3. General configuration parameters
  4. Thresholds – optional view for setting recovery log, bulk load and blocked sessions alerts

Step 2:

Create Data Dictionary for tables representing user application using Unity Director Configuration portlet.

1. Navigate to the DD tab and click on the plus icon:

2. Click on the newly created dictionary, give it a name, and go through the 4 steps within the wizard, either using the step buttons (1 to 4) or navigating via ‘back’ and ‘next’ buttons:

a. Scan databases to discover objects on the systems:

b. Select databases used in the application:

c. Scan objects within the selected databases:

d. Select objects that comprise the application:

3. Deploy Data Dictionary:

4. Observe application’s tables in the Deployed Dictionary area:  

Please note that users can have multiple dictionaries. Objects are not shared by multiple dictionaries. The Deployed Dictionary is the collection of all objects contained in deployed dictionaries.  Also, if the user kept the ‘Auto-select on object scan’ global setting, the application will automatically select objects that are identical across Teradata systems.

5. Dealing with errors:  

In some cases the user has to perform an analysis of errors related to DD objects. Not all errors prevent the user from deploying a DD. A typical critical error is an object definition mismatch across multiple systems. The DD tool allows the user to view errors and force a rescan of the objects in question.

6. Dealing with locks:

Unity Director performs its own lock management. It reads locking information during the scan and uses this knowledge to process user requests. Users can alter the default locking behavior for objects that are identical across systems.

7. Exporting and importing:

In some cases – for example, when moving from one version to another or when migrating from a different environment – the user needs to save a DD to a temporary location and restore it later. The DD tool provides export and import functionality. On import, it analyzes which dictionary objects are being imported and allows mapping objects to other existing dictionaries.

Step 3:

Activate the Teradata systems on the Components view if they are not already active.

Step 4:

Review and set routing rules

  1. This may be optional if the user prefers the default automatic routing rules
  2. Set rules to avoid requests from a specific user being routed to the remote system
  3. Set rules to satisfy different user behaviors required by the application, such as read-only, create, etc.

Step 5: Start application load scripts and begin monitoring Unity Director

In this last step the user may go to the command line and start a BTEQ or TPT script that loads data into the managed Teradata systems via Unity Director. After the script has started, go to the Dashboard Data Synchronization view to check that the tables are active. Then go to the Sessions view to see the sessions created by the load script.

Conclusion

After completing these steps users are ready to start using the Unity Director Viewpoint user interface to monitor and manage their data loads. This study shows how simple it is to get an application up and running with the new Unity Director 14.00 product.


TASM Implementation Tips and Techniques


Defining workloads in TDWM may qualify as a TASM implementation but is just a small piece of the puzzle.  This presentation will look at this and all the other pieces needed to complete the picture.

This presentation will provide you with best practices based on more than 100 TASM implementations.  Learn how to understand workloads and determine the impact of changes.  Examine how to improve tactical query performance and keep your system from AWT saturation and flow control.  If you’re setting up TASM for the first time, or just want to improve the scheme you have now, then this presentation is for you.

Note: This was a 2012 Teradata Partners Conference session.

Presenters:
Greg Hamilton - Teradata Corporation
Srini Gorella - Teradata Corporation
 

 

Audience: Data Warehouse Administrator, Data Warehouse Architect/Designer, Data Warehouse Technical Specialist

Training details
Course Number: 50492
Training Format: Recorded webcast
Price: $195
Credit Hours: 1

Entering and Exiting a TASM State


In earlier postings I’ve described how TASM system events can detect such things as AMP worker task shortages, and automatically react by changing workload management settings.  These system events tell TASM to look out for and gracefully react to resource shortages without your direct intervention, by doing things like temporarily adjusting throttle limits downwards for the less critical work.

This switchover happens as a result of TASM moving you from one Health Condition to another, and as a result, from one state to another.  But how does this shift to a new state actually happen?  And under what conditions will you be moved back to the previous state?

 

Timers that control the movement between states

Before you use system events to move you between states, you will need to familiarize yourself with a few timing parameters within TASM, including the following:

  • System-wide Event Interval - The time between asynchronous checks for event occurrences.  It can be set to 5, 10, 30 or 60 seconds, with 60 seconds being the default
  • Event-specific Qualification Time – To ensure the condition is persistent, this is the amount of time the event conditions must be sustained in order to trigger the event
  • Health Condition Minimum Duration – The minimum amount of time that a health condition (which points to the new state) must be maintained, even when the conditions that triggered the change are no longer present; this prevents constant flip-flopping between states.

 

Entering a State

Let’s assume you have defined an AWT Available event that triggers when you have only two AMP worker tasks available on any five of your 400 AMPs, with a Qualification Time of 180 seconds.   Assume that you have defined the Health Condition associated with the state to have a Minimum Duration of 10 minutes, representing the time that must pass before the system can move back to the original state. 

TASM takes samples of database metrics in real-time, looking to see if any event thresholds have been met.  It performs this sampling at the frequency of the event interval. 

Once a sampling interval discovers the system is at the minimum AMP worker task level defined in the event, a timer is started.   No state change takes place yet.  The timer continues on as long as each subsequent sample meets the event’s thresholds.  If a new event sample shows that the event thresholds are no longer being met, then the timer will start all over again with the next sample that meets the event’s threshold criteria.

Only when the timer reaches the Qualification Time (180 seconds) will the event be triggered, assuming that all samples along the way have met the event’s threshold.  At that point TASM moves to the new state. 

 

Exiting a State

Returning to the original state follows a somewhat similar pattern.

The Minimum Duration DOES NOT determine how long you will remain in the new state, but rather it establishes the minimum time that TASM is required to keep you in the new state before reverting back to the original state.  

So when will you exit a state?

Event sampling continues at the event-interval frequency the whole time you are in the new state.  Even if the event threshold criteria are no longer being met and available AWTs are detected to be above the threshold, once the move to the new state has taken place, the new state remains in control for the Minimum Duration.

After the Minimum Duration has passed, if event sampling continues to show that the AWT thresholds are being met (you still have at least five AMPs with only two AWTs available), TASM will continue to stay in the new state.  Only after the first sample that fails to meet the event thresholds (once the Minimum Duration has passed) will control move back to the original state.

The bottom line is that you will not return to the original state until the Minimum Duration of the state's Health Condition has passed, and even then you will not be returned if the condition that triggered the event persists.
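
A small simulation makes the interplay of these timers concrete.  This is an illustrative sketch of the logic described above, not TASM code; condition_met() is a stub standing in for the real AWT check, and the constants come from the example (60-second event interval, 180-second Qualification Time, 10-minute Minimum Duration):

#include <stdio.h>
#include <stdbool.h>

#define EVENT_INTERVAL      60    /* seconds between event samples     */
#define QUALIFICATION_TIME 180    /* condition must persist this long  */
#define MINIMUM_DURATION   600    /* new state held at least this long */

/* Stub: returns true when the event condition holds, e.g. at least
   five AMPs with only two AWTs available. */
static bool condition_met(int t) { return t >= 120 && t < 900; }

int main(void)
{
    bool degraded = false;      /* false = original state, true = new state */
    int  qual_timer = 0;        /* seconds the condition has persisted      */
    int  time_in_state = 0;     /* seconds since entering the new state     */

    for (int t = 0; t <= 1800; t += EVENT_INTERVAL) {
        bool met = condition_met(t);

        if (!degraded) {
            /* Entering: the condition must persist for the Qualification
               Time; any failing sample restarts the timer. */
            qual_timer = met ? qual_timer + EVENT_INTERVAL : 0;
            if (qual_timer >= QUALIFICATION_TIME) {
                degraded = true;
                time_in_state = 0;
                printf("t=%4d: event triggered, entering new state\n", t);
            }
        } else {
            /* Exiting: stay at least the Minimum Duration, then leave on
               the first sample that fails to meet the thresholds. */
            time_in_state += EVENT_INTERVAL;
            if (time_in_state >= MINIMUM_DURATION && !met) {
                degraded = false;
                qual_timer = 0;
                printf("t=%4d: returning to original state\n", t);
            }
        }
    }
    return 0;
}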


Teradata Performance: Why DBQL Matters the Most


When it comes to Teradata Performance, DBQL matters the most!

DBQL tables are the number one source of data to support application performance and workload management analysis. DBQL data helps identify problem queries and helps discern what makes them suspect. In this presentation you will learn how to identify opportunities to significantly decrease batch processing time and enhance query performance. Also, discover how to validate savings in CPU, IO and time when doing performance tuning. And then uncover how DBQL data can help you adjust your workload mix, from identifying problem areas for SLAs to identifying queries classified in the wrong workload. If performance matters most, you won’t want to miss this presentation!

Note: This was a 2011 Partners Conference Presentation, updated in April 2013.

Presenters:
Barbara Christjohn - Teradata Corporation
Wendy Hwang - Teradata Corporation

Audience: DW Technical Specialists, DBAs, DW Application Specialists

Training details
Course Number: 47222
Training Format: Recorded webcast
Price: $195
Credit Hours: 1

Teradata OLAP Connector (TOC) 14.00 Maintenance Release Now Available


Teradata OLAP Connector (TOC), Schema Workbench (TSW) and Aggregate Designer (TAD) 14.00 Maintenance releases have now been posted for download.  In addition to several key fixes, this maintenance release includes a new TOC feature that allows function, method, and operator calculations to be applied in the underlying Teradata Database rather than on the client system. These calculations are known as pushdown calculations.

The Pushdown system is an evaluation method that allows the entire dataset returned by an MDX query to be evaluated all at once, instead of the traditional Cell-by-Cell (Context) evaluation.
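
Conceptually (this is an illustration of the evaluation shapes, not TOC or Teradata internals; the data and names are made up), the difference looks like this: cell-by-cell evaluation performs a separate round of work per result cell, while pushdown computes the set-based result where the data lives and returns it in one pass:

#include <stdio.h>

#define MEMBERS 4

/* Stand-in for one "context" (cell-by-cell) evaluation, which in the
   client case means a separate round of work per cell. */
static double fetch_cell(int member)
{
    static const double sales[MEMBERS] = {10.0, 20.0, 30.0, 40.0};
    return sales[member];
}

int main(void)
{
    /* Cell-by-cell: one evaluation per cell in the result set. */
    double total = 0.0;
    for (int m = 0; m < MEMBERS; m++)
        total += fetch_cell(m);
    printf("cell-by-cell total: %.1f (%d separate evaluations)\n",
           total, MEMBERS);

    /* Pushdown: the aggregate (e.g., a SUM over a set of members) is
       computed in-database, and the client receives a single
       already-evaluated result. */
    double pushed = 10.0 + 20.0 + 30.0 + 40.0;  /* stands in for one
                                                   set-based SUM */
    printf("pushdown total:     %.1f (1 evaluation)\n", pushed);
    return 0;
}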

The functions, methods, and operators fall into two categories: those that are eligible for Pushdown on their own, and those that are eligible for Pushdown only when found under another Pushdown-eligible function. Currently, all of the eligible operators fall into the former category, and all of the eligible methods fall into the latter category.

Operators

The functions and operators listed below are eligible for Pushdown and can be the calculated measure expression's outermost function, as they return values. These functions include:

  • Arithmetic operations: +, -, *, /, abs
  • Comparison operations: >, <, =, >=, <=
  • Simple Aggregate functions: SUM, AVG, MAX, MIN, COUNT, DISTINCTCOUNT
  • IIF

Methods

The functions and methods listed below are also eligible for Pushdown; however, since they do not return values, they are eligible only when they appear as parameters of other Pushdown-eligible functions. These include:

  • Descendants
  • Filter
  • PeriodsToDate
  • YTD
  • QTD
  • MTD
  • WTD
  • ParallelPeriod
  • OpeningPeriod
  • ClosingPeriod
  • CurrentMember
  • PrevMember
  • Level

For more information on pushdown, follow the link below.


Using DBQL Object Data for Targeting MLPPI and Columnar Candidacy


Have you ever struggled over whether, or how, to use a new database feature?  In this presentation, we’ll demonstrate how database query log (DBQL) data can help when it comes to determining candidates for MLPPI (Multi-Level Partitioned Primary Index) as well as the new Columnar Partitioning feature of Teradata 14.

This presentation will discuss using DBQL object data for gathering such key information as frequency of use, large scan, and total query counts, as well as CPU and I/O usage.  In addition, the presentation will review techniques for testing and validating conclusions, instilling confidence in your decisions.  Finally, learn how the wealth of information available from DBQL can make it easier for you to benefit from the performance advantages that come with features like MLPPI and Columnar.

Note: This was a 2012 Teradata Partners Conference session.

Teradata Release information: TD 13 & 14

Presenters:
Barbara Christjohn, Performance COE – Teradata Corporation
David Hensel, Performance COE – Teradata Corporation

Audience: Data Warehouse Administrator, Data Warehouse Architect/Designer, Data Warehouse Technical Specialist

Training details
Course Number: 50495
Training Format: Recorded webcast
Price: $195
Credit Hours: 1

Hadoop Smart Loader


Teradata Studio 14.02 provides a Smart Loader for Hadoop feature that allows users to transfer data from Teradata to Hadoop and Hadoop to Teradata. The Hadoop Smart Loader uses the Teradata Connector for Hadoop MapReduce Java classes as the underlying technology for data movement. It requires the HCatalog metadata layer to import Hadoop objects.  Currently, the Smart Loader for Hadoop feature in Teradata Studio 14.02 is certified to use the Teradata Connector for Hadoop version 1.0.6, and the Hortonworks distribution of Hadoop.  Since Cloudera CDH 4.2 now has HCatalog, testing will be done soon to certify Smart Loader for Hadoop with Cloudera.

                  

 

With bi-directional data loading, users can easily perform ad hoc data transfer between their Teradata and Hadoop systems. The Hadoop Smart Loader can be invoked by drag and drop of a table between the two systems or from a menu option to Import from Hadoop or Export to Hadoop.

Enable Hadoop Smart Loader

To use the Hadoop Smart Loader you must first enable the Hadoop Transfer perspective within Teradata Studio. From the Window>Preferences menu option, open the Data Transfer preferences page and check the 'Enable Hadoop Views' checkbox.

Then click the Open Perspective button in the upper toolbar and select Other.... Next select Hadoop Transfer and click OK.

                     

This will open the Hadoop Transfer perspective, providing the Hadoop View, Transfer Progress View, and Transfer History View in your Teradata Studio display.

Create Hadoop Connection Profile

Now you are ready to create a Hadoop connection profile and transfer data. Click the 'Add a Hadoop profile' button in the Hadoop View to invoke the Hadoop Profile dialog. Enter the name for your Hadoop connection profile, the HCatalog hostname, and the system username and password.

                      

The Hadoop View will connect to your Hadoop system and display the list of Hadoop schemas. Open the Tables folder to view the list of tables. Right click on a table and select the Table Properties option to see the properties for a Hadoop table, such as location, file size, file type and column names and types. Drag a table from Hadoop and drop in on your Teradata Database or invoke the Import and Export wizards from the Data Source Explorer.

Import a Table from Hadoop

You can invoke the Import from Hadoop wizard from the Teradata Studio Data Source Explorer. Connect to your Teradata database and locate the Tables folder for the database you want to import into. Right click and choose the Teradata>Import from Hadoop... menu option.

Select the Hadoop connection profile, database, and table you want to import and click Next.

                  

The next screen allows you to edit the table name, 'No Primary Index' option, and column data types.

                  

NOTE: Columns that are defined as strings in Hadoop are given the Teradata column type of VARCHAR with a default length of 2048. You should edit these columns to provide a more appropriate size for the VARCHAR column. Click the ellipsis button to edit the column type.

                   

Click OK to create the table in your Teradata Database and start the data transfer from Hadoop.

                    

Export a Teradata Table to Hadoop

You can invoke the Export To Hadoop wizard from the Teradata Studio Data Source Explorer. Connect to your Teradata Database and locate the table you want to export to the Hadoop system. Right click and choose the Data>Export To Hadoop... option.

            


Choose the Hadoop connection profile and database to export the table to and click Next.

                 

Verify the column types created for the Hadoop table. You can choose between RC and Text file transfers.  Click Finish to perform the transfer.

                 

Hadoop Data Transfer Job

A transfer job is created to transfer the data to and from Teradata and Hadoop. You can view the progress of the transfer job in the Transfer Progress View of the Hadoop perspective. Once the job is complete, an entry is placed in the Transfer History and displayed in the Transfer History View.

Select the entry in the Transfer History and click on the Show Job Output toolbar button to view the output from the Hadoop job transfer.

Help

Teradata Studio provides Help information. Click on Help>Help Contents in the main toolbar.

     

Conclusion

Teradata Studio Hadoop Smart Loader provides an ad hoc data movement tool to transfer data between Teradata and Hadoop. It provides a point and click GUI where no scripting is required. You can download Teradata Studio on the Teradata Download site. For more information about other Teradata Studio features, refer to the article called Teradata Studio.

 


In-Database Analytics and Physical Database Design


It's well-known that the application of database optimization techniques can improve query performance.

However, these techniques are often overlooked when it comes to analytical queries (especially those run in-database), as those queries are already significantly faster than analysts are used to. Yet the application of techniques such as AJIs, MVC, MLPPI, soft RI and columnar partitioning can lead to further improvements in processing time. This talk will explain those techniques and show how they can be applied to analytical data sets to increase the productivity of the analysts. Examples from the field will be presented detailing how they benefited the analytic process.

Note: This was a 2012 Teradata Partners Conference session.

Presenter: Paul Segal, PS Consultant – Teradata Corporation

Audience: Data Warehouse Analytical Modeler, Data Warehouse Business Users

Training details
Course Number: 50481
Training Format: Recorded webcast
Price: $195
Credit Hours: 1

Teradata Connector for Hadoop (Studio Edition)

Teradata Connector for Hadoop (Command Line Edition)

Teradata Connector for Hadoop (Sqoop Integration Edition)


Teradata Database Architecture Overview


The Teradata Database system is different from other databases in a number of fundamental ways.

If you want to know what these differences are and how they make it possible for Teradata to deliver unlimited scalability in every dimension, high performance, simple management and all the other requirements of an Active Data Warehouse, then this is the presentation for you. We will discuss how the architecture of Teradata enables you to quickly, efficiently and flexibly deliver value to your business.

Key Points:

  • Teradata's key differentiators
  • What the differences mean to the IT staff
  • How the differences result in value to the business

Note: This was a 2012 Teradata Partners Conference session.

Presenter: Todd Walter, Distinguished Fellow – Teradata Corporation

Audience: Data Warehouse Administrator, Data Warehouse Architect/Designer, Data Warehouse Program Manager

Training details
Course Number: 50480
Training Format: Recorded webcast
Price: $195
Credit Hours: 3

MultiLoad Notify Exit Routine issues in 14.00 and 14.10


The Teradata MultiLoad Notify Exit Routine has some issues in the 14.00 and 14.10 releases because of enhancements to the MultiLoad notify events logic. This document discusses the enhancements and issues in detail.

Notify Event Enhancements

MultiLoad Delete Initialize Event Enhancement

In the initial Teradata Multi-System Manager (TMSM)/MultiLoad (MLOAD) integration, no data was passed to the MultiLoad Delete Initialize Event. TMSM requested that the same data sent to the MLOAD Initialize Event be passed to the MultiLoad Delete Initialize Event.

This enhancement was implemented in the 13.10 release. The data passed to the MultiLoad Delete Initialize Event is as follows:

  • Version ID length—4-byte unsigned integer
  • Version ID string—32-byte (maximum) array
  • Utility ID—4-byte unsigned integer
  • Utility name length—4-byte unsigned integer
  • Utility name string—32-byte (maximum) array
  • User name length—4-byte unsigned integer
  • User name string—64-byte (maximum) array
  • Optional string length—4-byte unsigned integer
  • Optional string—80-byte (maximum) array
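
In C terms, this layout corresponds to the DeleteInit member of the MLNotifyExitParm union shown in the sample exits later in this article (the sizes below come from those samples):

#define MAXVERSIONIDLEN    32
#define MAXUTILITYNAMELEN  32
#define MAXUSERNAMELEN     64
#define MAXUSERSTRLEN      80

typedef unsigned long UInt32;

/* Matches the DeleteInit member of the MLNotifyExitParm union. */
struct DeleteInit {
   UInt32 VersionLen;
   char   VersionId[MAXVERSIONIDLEN];      /* 32 bytes max */
   UInt32 UtilityId;
   UInt32 UtilityNameLen;
   char   UtilityName[MAXUTILITYNAMELEN];  /* 32 bytes max */
   UInt32 UserNameLen;
   char   UserName[MAXUSERNAMELEN];        /* 64 bytes max */
   UInt32 UserStringLen;
   char   UserString[MAXUSERSTRLEN];       /* 80 bytes max */
};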

MultiLoad Phase 1 Begin Event Enhancement

A customer requested that the Database Name be passed to the MultiLoad Phase 1 Begin Event.  When this enhancement was implemented in the 14.00 release, the development team added the Database Name to the beginning of the Phase 1 Begin structure, not the end of it.

See the code snippet below:

struct {
	char     DBaseName[MAXDBASENAMELEN];
	UInt32 TableNameLen;
	char     TableName[MAXTABLENAMELEN];
	UInt32 dummy;
} PhaseIBegin;

This made the Notify Exit facility for MultiLoad 14.00 and 14.10 non-functional with Notify Exit Routines written for pre-14.00 releases, because the field mappings changed. A bug (SA-29733) was opened to have the Database Name placed at the end of the structure.  SA-29733 was also used to back out some Notify source code that supported 8-byte activity counts. SA-29733 was implemented in MultiLoad eFixes 14.00.00.009 and 14.10.00.003.
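
To see why the field mappings changed, compare the offset at which a pre-14.00 exit expects TableName with where the 14.00 structure actually placed it. This is a self-contained sketch using the sizes from the samples in this article (UInt32 is narrowed to unsigned int here so the offsets match the original 32-bit layout):

#include <stdio.h>
#include <stddef.h>

/* Sketch showing why prepending DBaseName broke pre-14.00 notify exits:
   every field an old exit expects shifts by 62 bytes (MAXDBASENAMELEN),
   plus alignment padding. */

typedef unsigned int UInt32;
#define MAXTABLENAMELEN 128
#define MAXDBASENAMELEN  62

struct OldPhaseIBegin {                /* layout a pre-14.00 exit expects */
    UInt32 TableNameLen;
    char   TableName[MAXTABLENAMELEN];
    UInt32 TableNo;
};

struct NewPhaseIBegin {                /* 14.00 layout before SA-29733    */
    char   DBaseName[MAXDBASENAMELEN];
    UInt32 TableNameLen;
    char   TableName[MAXTABLENAMELEN];
    UInt32 dummy;
};

int main(void)
{
    printf("TableName offset, old: %zu  new: %zu\n",
           offsetof(struct OldPhaseIBegin, TableName),
           offsetof(struct NewPhaseIBegin, TableName));
    /* An old exit reading TableName at its pre-14.00 offset now lands
       in the middle of DBaseName/TableNameLen instead. */
    return 0;
}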

Now the data passed to MultiLoad Phase 1 Begin Event is:

  • Table name length—4-byte unsigned integer
  • Table name—128-byte (maximum) array
  • Table number—4-byte unsigned integer
  • Database name—62-byte (maximum) array

MultiLoad Notify Issues in the 14.10 GCA Release

The following Notify logic issues exist in the 14.10.00.000 release:

  • Usage of both MLNotifyExitParm and MLNotifyExitParm64 structures to report 4-byte and 8-byte activity counts
  • Two functions MLNotify() and MLNotifyEon() to process Notify events
  • Two Notify structure pointers
#ifdef __MVS__
   if(info->ModErr = modcall(p,p64))/*DR135262 DR147533*/
#else
   if (info->ModErr = theExit(p,p64)) /* DR147533 */
#endif
  • Two MultiLoad samples mlnotf.c and mlnotfeon.c
    • Since no install change was done to include mlnotfeon.c, the MultiLoad Notify feature was broken if the script instrumented either the Extended Object Name (EON) logic or the 8-byte activity count logic.
  • The activity counts are not correct when using either the old sample mlnotf.c or the new sample mlnotfeon.c.

JIRA Bug SA-29992 was created to track these bugs. SA-29992 was implemented in the 14.10.00.004 MultiLoad eFix. NOTE: Some code from other JIRA issues was backed out to fix the problems. The code changes were implemented in the following areas:

  • Introduced new Notify events:
NMEventInitializeEON  = 30, /* SA-29992 */
NMEventPhaseIBeginEON  = 31,/* SA-29992 */
NMEventCheckPoint64 = 32,/* SA-29992 */
NMEventPhaseIIEnd64  = 33, /* SA-29992 */
NMEventErrorTableI64  = 34, /* SA-29992 */
NMEventErrorTableII64  = 35, /* SA-29992 */
NMEventDeleteInitEON  = 36, /* SA-29992 */
NMEventDeleteBeginEON = 37, /* SA-29992 */
NMEventDeleteEnd64  = 38 /* SA-29992 */
  • Introduced new structures: PhaseIEnd64, ImportEnd64, InitializeEON, PhaseIBeginEON, CheckPoint64, PhaseIIEnd64, ErrorTableI64, ErrorTableII64, DeleteInitEON, DeleteBeginEON, DeleteEnd64

  • Added new event handling in mldnotfy.c for: NMEventInitializeEON, NMEventPhaseIBeginEON, NMEventCheckPoint64, NMEventPhaseIIEnd64, NMEventErrorTableI64, NMEventErrorTableII64, NMEventDeleteInitEON, NMEventDeleteBeginEON, NMEventDeleteEnd64

  • In mldcli.c, mldexec.c and mldstmts.c, when EXITEON or EXIT64 is specified, the new events are notified. Otherwise, the old events are notified.
  • Applied SA-29733 (add database name enhancement) to the MLOAD DELETE task.

Sample MultiLoad Notify Exit Routines

For 14.00

/**********************************************************************/
/*                                                                    */
/* mlnotf.c   - Sample Notify Exit for MultiLoad.                     */
/*                                                                    */
/* Copyright 1998-2013 by Teradata Corporation.                       */
/* ALL RIGHTS RESERVED.                                               */
/* TERADATA CONFIDENTIAL AND TRADE SECRET                             */
/*                                                                    */
/* Purpose    - This is a sample notify exit for MultiLoad.           */
/*                                                                    */
/* Execute    - Build Notify on a Unix system                         */
/*                compile and link into shared object                 */
/*                    cc -G mlnotf.c -o mlnotf.so                     */
/*                                                                    */
/*            - Build Notify on a Win32 system                        */
/*                compile and link into dynamic link library          */
/*                    cl /DWIN32 /LD mlnotf.c                         */
/*                                                                    */
/**********************************************************************/

#include <stdio.h>
typedef unsigned long UInt32;
typedef enum {
   NMEventInitialize     =  0,
   NMEventFileInmodOpen  =  1,
   NMEventPhaseIBegin    =  2,
   NMEventCheckPoint     =  3,
   NMEventPhaseIEnd      =  4,
   NMEventPhaseIIBegin   =  5,
   NMEventPhaseIIEnd     =  6,
   NMEventErrorTableI    =  7,
   NMEventErrorTableII   =  8,
   NMEventDBSRestart     =  9,
   NMEventCLIError       = 10,
   NMEventDBSError       = 11,
   NMEventExit           = 12,
   NMEventAmpsDown       = 21,
   NMEventImportBegin    = 22,
   NMEventImportEnd      = 23,
   NMEventDeleteInit     = 24,
   NMEventDeleteBegin    = 25,
   NMEventDeleteEnd      = 26,
   NMEventDeleteExit     = 27
} NfyMLDEvent;

/**************************************/
/* Structure for User Exit Interface  */
/* DR42570 - redesigned and rewritten */
/**************************************/

#define NOTIFYID_FASTLOAD      1
#define NOTIFYID_MULTILOAD     2
#define NOTIFYID_FASTEXPORT    3
#define NOTIFYID_BTEQ          4
#define NOTIFYID_TPUMP         5

#define MAXVERSIONIDLEN    32
#define MAXUTILITYNAMELEN  32
#define MAXUSERNAMELEN     64
#define MAXUSERSTRLEN      80
#define MAXTABLENAMELEN   128
#define MAXFILENAMELEN    256
#define MAXDBASENAMELEN    62                /* Improvement SA-5394 */

typedef struct _MLNotifyExitParm {
   UInt32 Event;                    /* should be NfyMLDEvent values */
   union {
      struct {
         UInt32 VersionLen;
         char   VersionId[MAXVERSIONIDLEN];
         UInt32 UtilityId;
         UInt32 UtilityNameLen;
         char   UtilityName[MAXUTILITYNAMELEN];
         UInt32 UserNameLen;        
         char   UserName[MAXUSERNAMELEN];
         UInt32 UserStringLen; 
         char   UserString[MAXUSERSTRLEN];
      } Initialize;
      struct {
         UInt32 FileNameLen;
         char   FileOrInmodName[MAXFILENAMELEN];
         UInt32 ImportNo;
      } FileInmodOpen ;
      struct {
         UInt32 TableNameLen;
         char   TableName[MAXTABLENAMELEN];
         UInt32 TableNo;
         char   DBaseName[MAXDBASENAMELEN]; 
      } PhaseIBegin;
      struct {
         UInt32 RecordCount;
      } CheckPoint;
      struct {
         UInt32 RecsRead;
         UInt32 RecsSkipped;
         UInt32 RecsRejected;
         UInt32 RecsSent;
      } PhaseIEnd ;
      struct {
         UInt32 dummy;
      } PhaseIIBegin;
      struct {
         UInt32 Inserts;
         UInt32 Updates;
         UInt32 Deletes;
         UInt32 TableNo;
      } PhaseIIEnd;
      struct {
         UInt32 Rows;
         UInt32 TableNo;
      } ErrorTableI;
      struct {
         UInt32 Rows;
         UInt32 TableNo;
      } ErrorTableII ;
      struct {
         UInt32 dummy;
      } DBSRestart;
      struct {
         UInt32 ErrorCode;
      } CLIError;
      struct {
         UInt32 ErrorCode;
      } DBSError;
      struct {
         UInt32 ReturnCode;
      } Exit;
      struct {
         UInt32 dummy;
      } AmpsDown;
      struct {
         UInt32 ImportNo;
      } ImportBegin ;
      struct {
         UInt32 RecsRead;
         UInt32 RecsSkipped;
         UInt32 RecsRejected;
         UInt32 RecsSent;
         UInt32 ImportNo;
      } ImportEnd ;
      struct {                                       
         UInt32 VersionLen;
         char   VersionId[MAXVERSIONIDLEN];
         UInt32 UtilityId;
         UInt32 UtilityNameLen;
         char   UtilityName[MAXUTILITYNAMELEN];
         UInt32 UserNameLen;        
         char   UserName[MAXUSERNAMELEN];
         UInt32 UserStringLen; 
         char   UserString[MAXUSERSTRLEN];
      } DeleteInit;
      struct {
         UInt32 TableNameLen;
         char   TableName[MAXTABLENAMELEN];
         UInt32 TableNo;
      } DeleteBegin;
      struct {
         UInt32 Deletes;
         UInt32 TableNo;
      } DeleteEnd;
      struct {
         UInt32 ReturnCode;
      } DeleteExit;
   } Vals;
} MLNotifyExitParm;

#ifdef I370                                                  
#define  MLNfyExit MLNfEx                                   
#endif                                                       

extern long MLNfyExit(
#ifdef __STDC__                                
                      MLNotifyExitParm *Parms
#endif                                          
);

#ifdef WIN32                                    
__declspec(dllexport) long _dynamn(MLNotifyExitParm *P)           
#else                                                             
long _dynamn( MLNotifyExitParm *P)                      
#endif                                                            
{
    FILE *fp;
 
    if (!(fp = fopen("NFYEXIT.OUT", "a")))
        return(1);
 
    switch(P->Event) {
    case NMEventInitialize :   
        fprintf(fp, "exit called @ mload init.\n");
        fprintf(fp, "Version: %s\n", P->Vals.Initialize.VersionId);
        fprintf(fp, "Utility: %s\n", P->Vals.Initialize.UtilityName);
        fprintf(fp, "User: %s\n", P->Vals.Initialize.UserName);
        if (P->Vals.Initialize.UserStringLen)
           fprintf(fp, "UserString: %s\n", P->Vals.Initialize.UserString);
        break;
    case NMEventFileInmodOpen:
        fprintf(fp, "exit called @ file open: import[%d]: %s\n",
                P->Vals.FileInmodOpen.ImportNo,
                P->Vals.FileInmodOpen.FileOrInmodName);
        break;                                                   
    case NMEventPhaseIBegin :
        fprintf(fp, "exit called @ acquistion start: Database Name : %s.\n",
            P->Vals.PhaseIBegin.DBaseName);    /* Improvement SA-5394 */
        fprintf(fp, "exit called @ acquistion start: tablename[%d] : %s.\n",
            P->Vals.PhaseIBegin.TableNo,
            P->Vals.PhaseIBegin.TableName);
        break;
    case NMEventCheckPoint :
        fprintf(fp, "exit called @ checkpoint : %d records loaded.\n",
                P->Vals.CheckPoint.RecordCount);
        break;
    case NMEventPhaseIEnd :
        fprintf(fp, "exit called @ acquistion end.\n");
        fprintf(fp, "Records Read: %d\n", P->Vals.PhaseIEnd.RecsRead);
        fprintf(fp, "Records Skipped: %d\n", P->Vals.PhaseIEnd.RecsSkipped);
        fprintf(fp, "Records Rejected: %d\n", P->Vals.PhaseIEnd.RecsRejected);
        fprintf(fp, "Records Sent: %d\n", P->Vals.PhaseIEnd.RecsSent);
        break;
    case NMEventPhaseIIBegin :
        fprintf(fp, "exit called @ application start\n");
        break;
    case NMEventPhaseIIEnd :
        fprintf(fp, "exit called @ application complete for table %d.\n",
                P->Vals.PhaseIIEnd.TableNo);
        fprintf(fp, "%d updates, %d inserts, %d deletes\n",
                P->Vals.PhaseIIEnd.Updates,
                P->Vals.PhaseIIEnd.Inserts,
                P->Vals.PhaseIIEnd.Deletes);
        break;
    case NMEventErrorTableI :
        fprintf(fp, 
               "exit called @ ET Table[%d] Drop : %d records in table.\n",
                P->Vals.ErrorTableI.TableNo, P->Vals.ErrorTableI.Rows);
        break;
    case NMEventErrorTableII :
        fprintf(fp, 
               "exit called @ UV Table[%d] Drop : %d records in table.\n",
                P->Vals.ErrorTableII.TableNo, P->Vals.ErrorTableII.Rows);
        break;
    case NMEventDBSRestart :
        fprintf(fp, "exit called @ RDBMS restarted\n");
        break;
    case NMEventCLIError :
        fprintf(fp, "exit called @ CLI error %d\n",
                P->Vals.CLIError.ErrorCode);
        break;
    case NMEventDBSError :
        fprintf(fp, "exit called @ DBS error %d\n",
                P->Vals.DBSError.ErrorCode);
        break;

    case NMEventExit :
      fprintf(fp, "exit called @ mload notify out of scope: return code %d.\n",
                P->Vals.Exit.ReturnCode);
        break;
    case NMEventAmpsDown :
        fprintf(fp, "exit called @ down amps have been detected\n");
        break;
    case NMEventImportBegin :
        fprintf(fp, "exit called @ import %d starting\n",
                P->Vals.ImportBegin.ImportNo);
        break;
    case NMEventImportEnd :
        fprintf(fp, "exit called @ import %d ending.\n",
                     P->Vals.ImportEnd.ImportNo);
        fprintf(fp, "Records Read: %d\n", P->Vals.ImportEnd.RecsRead);
        fprintf(fp, "Records Skipped: %d\n", P->Vals.ImportEnd.RecsSkipped);
        fprintf(fp, "Records Rejected: %d\n", P->Vals.ImportEnd.RecsRejected);
        fprintf(fp, "Records Sent: %d\n", P->Vals.ImportEnd.RecsSent);
        break;
    case NMEventDeleteInit : 
        fprintf(fp, "exit called @ mload delete init.\n");
        fprintf(fp, "Version: %s\n", P->Vals.DeleteInit.VersionId);
        fprintf(fp, "Utility: %s\n", P->Vals.DeleteInit.UtilityName);
        fprintf(fp, "User: %s\n", P->Vals.DeleteInit.UserName);
        if (P->Vals.DeleteInit.UserStringLen)
           fprintf(fp, "UserString: %s\n", P->Vals.DeleteInit.UserString);
        break;
    case NMEventDeleteBegin :
        fprintf(fp, "exit called @ delete app start for table[%d]: %s.\n",
                P->Vals.DeleteBegin.TableNo, P->Vals.DeleteBegin.TableName);
        break;
    case NMEventDeleteEnd :
        fprintf(fp, "exit called @ delete app done for table[%d]: %d rows.\n",
                P->Vals.DeleteEnd.TableNo, P->Vals.DeleteEnd.Deletes);
        break;
    case NMEventDeleteExit :
fprintf(fp, "exit called @ mload delete notify out of scope: return code %d.\n",
                P->Vals.DeleteExit.ReturnCode);
        break;
    }
    fclose(fp);
    return(0);
}

For 14.10.00.004

/**********************************************************************/
/*                                                                    */
/* mlnotf.c   - Sample Notify Exit for MultiLoad.                     */
/*                                                                    */
/* Copyright 1998-2013 by Teradata Corporation.                       */
/* ALL RIGHTS RESERVED.                                               */
/* TERADATA CONFIDENTIAL AND TRADE SECRET                             */
/*                                                                    */
/* Purpose    - This is a sample notify exit for MultiLoad.           */
/*                                                                    */
/* Execute    - Build Notify on a Unix system                         */
/*                compile and link into shared object                 */
/*                    cc -G mlnotf.c -o mlnotf.so                     */
/*                                                                    */
/*            - Build Notify on a Win32 system                        */
/*                compile and link into dynamic link library          */
/*                    cl /DWIN32 /LD mlnotf.c                         */
/*                                                                    */
/**********************************************************************/
#include <stdio.h>
typedef unsigned long UInt32;
typedef enum {
   NMEventInitialize           =  0,
   NMEventFileInmodOpen        =  1,
   NMEventPhaseIBegin          =  2,
   NMEventCheckPoint           =  3,
   NMEventPhaseIEnd            =  4,
   NMEventPhaseIIBegin         =  5,
   NMEventPhaseIIEnd           =  6,
   NMEventErrorTableI          =  7,
   NMEventErrorTableII         =  8,
   NMEventDBSRestart           =  9,
   NMEventCLIError             = 10,
   NMEventDBSError             = 11,
   NMEventExit                 = 12,
   NMEventAmpsDown             = 21,
   NMEventImportBegin          = 22,
   NMEventImportEnd            = 23,
   NMEventDeleteInit           = 24,
   NMEventDeleteBegin          = 25,
   NMEventDeleteEnd            = 26,
   NMEventDeleteExit           = 27,
   NMEventPhaseIEnd64          = 28,        
   NMEventImportEnd64          = 29,                           
   NMEventInitializeEON        = 30,                          
   NMEventPhaseIBeginEON       = 31,                            
   NMEventCheckPoint64         = 32,                            
   NMEventPhaseIIEnd64         = 33,                            
   NMEventErrorTableI64        = 34,                           
   NMEventErrorTableII64       = 35,                            
   NMEventDeleteInitEON        = 36,                            
   NMEventDeleteBeginEON       = 37,                            
   NMEventDeleteEnd64          = 38                            
} NfyMLDEvent;

#define NOTIFYID_FASTLOAD      1
#define NOTIFYID_MULTILOAD     2
#define NOTIFYID_FASTEXPORT    3
#define NOTIFYID_BTEQ          4
#define NOTIFYID_TPUMP         5

#define MAXVERSIONIDLEN        32
#define MAXUTILITYNAMELEN      32
#define MAXUSERNAMELEN         64
#define MAXUSERSTRLEN          80
#define MAXTABLENAMELEN        128
#define MAXFILENAMELEN         256
#define MAXDBASENAMELEN        62                           
#define MAXUINT64LEN           24                            
typedef struct _MLNotifyExitParm {
   UInt32 Event;                      /* should be NfyMLDEvent values */
   union {
      struct {
         UInt32 VersionLen;
         char   VersionId[MAXVERSIONIDLEN];
         UInt32 UtilityId;
         UInt32 UtilityNameLen;
         char   UtilityName[MAXUTILITYNAMELEN];
         UInt32 UserNameLen;        
         char   UserName[MAXUSERNAMELEN];
         UInt32 UserStringLen; 
         char   UserString[MAXUSERSTRLEN];
      } Initialize;
      struct {
         UInt32 FileNameLen;
         char   FileOrInmodName[MAXFILENAMELEN];
         UInt32 ImportNo;
      } FileInmodOpen ;
      struct {
         UInt32 TableNameLen;
         char   TableName[MAXTABLENAMELEN];
         UInt32 TableNo;
         char   DBaseName[MAXDBASENAMELEN];      
      } PhaseIBegin;
      struct {
         UInt32 RecordCount;
      } CheckPoint;
      struct {
         UInt32 RecsRead;
         UInt32 RecsSkipped;
         UInt32 RecsRejected;
         UInt32 RecsSent;
      } PhaseIEnd ;
      struct {
         UInt32 dummy;
      } PhaseIIBegin;
      struct {
         UInt32 Inserts;
         UInt32 Updates;
         UInt32 Deletes;
         UInt32 TableNo;
      } PhaseIIEnd;
      struct {
         UInt32 Rows;
         UInt32 TableNo;
      } ErrorTableI;
      struct {
         UInt32 Rows;
         UInt32 TableNo;
      } ErrorTableII ;
      struct {
         UInt32 dummy;
      } DBSRestart;
      struct {
         UInt32 ErrorCode;
      } CLIError;
      struct {
         UInt32 ErrorCode;
      } DBSError;
      struct {
         UInt32 ReturnCode;
      } Exit;
      struct {
         UInt32 dummy;
      } AmpsDown;
      struct {
         UInt32 ImportNo;
      } ImportBegin ;
      struct {
         UInt32 RecsRead;
         UInt32 RecsSkipped;
         UInt32 RecsRejected;
         UInt32 RecsSent;
         UInt32 ImportNo;
      } ImportEnd ;
      struct {                                       
         UInt32 VersionLen;
         char   VersionId[MAXVERSIONIDLEN];
         UInt32 UtilityId;
         UInt32 UtilityNameLen;
         char   UtilityName[MAXUTILITYNAMELEN];
         UInt32 UserNameLen;        
         char   UserName[MAXUSERNAMELEN];
         UInt32 UserStringLen; 
         char   UserString[MAXUSERSTRLEN];
      } DeleteInit;
      struct {
         UInt32 TableNameLen;
         char   TableName[MAXTABLENAMELEN];
         UInt32 TableNo;
         char   DBaseName[MAXDBASENAMELEN];            
      } DeleteBegin;
      struct {
         UInt32 Deletes;
         UInt32 TableNo;
      } DeleteEnd;
      struct {
         UInt32 ReturnCode;
      } DeleteExit;
      struct {
         char RecsRead[MAXUINT64LEN];
         char RecsSkipped[MAXUINT64LEN];
         char RecsRejected[MAXUINT64LEN];
         char RecsSent[MAXUINT64LEN];
      } PhaseIEnd64 ;
      struct {
         char RecsRead[MAXUINT64LEN];
         char RecsSkipped[MAXUINT64LEN];
         char RecsRejected[MAXUINT64LEN];
         char RecsSent[MAXUINT64LEN];
         UInt32 ImportNo;
      } ImportEnd64 ;       
      struct {
         UInt32 VersionLen;
         char   VersionId[MAXVERSIONIDLEN];
         UInt32 UtilityId;
         UInt32 UtilityNameLen;
         char   UtilityName[MAXUTILITYNAMELEN];
         UInt32 UserNameLen;        
         char   *UserName;
         UInt32 UserStringLen; 
         char   *UserString;
      } InitializeEON;
      struct {
         UInt32 TableNameLen;
         char   *TableName;
         UInt32 TableNo;
         char   *DBaseName;  
      } PhaseIBeginEON;
      struct {
         char RecordCount[MAXUINT64LEN];
      } CheckPoint64;
      struct {
         char Inserts[MAXUINT64LEN];
         char Updates[MAXUINT64LEN];
         char Deletes[MAXUINT64LEN];
         UInt32 TableNo;
      } PhaseIIEnd64;
      struct {
         char Rows[MAXUINT64LEN];
         UInt32 TableNo;
      } ErrorTableI64;
      struct {
         char Rows[MAXUINT64LEN];
         UInt32 TableNo;
      } ErrorTableII64 ;
      struct {
         UInt32 VersionLen;
         char   VersionId[MAXVERSIONIDLEN];
         UInt32 UtilityId;
         UInt32 UtilityNameLen;
         char   UtilityName[MAXUTILITYNAMELEN];
         UInt32 UserNameLen;        
         char   *UserName;
         UInt32 UserStringLen; 
         char   *UserString;
      } DeleteInitEON;
      struct {
         UInt32 TableNameLen;
         char   *TableName;
         UInt32 TableNo;
         char   *DBaseName; 
      } DeleteBeginEON;
      struct {
         char Deletes[MAXUINT64LEN];
         UInt32 TableNo;
      } DeleteEnd64;
   } Vals;
} MLNotifyExitParm;

#ifdef I370                                                  
#define  MLNfyExit MLNfEx                                   
#endif                                                       

extern long MLNfyExit(
#ifdef __STDC__                                
                      MLNotifyExitParm *Parms
#endif                                          
);

#ifdef WIN32                                    
__declspec(dllexport) long _dynamn(MLNotifyExitParm *P)           
#else                                                             
long _dynamn( MLNotifyExitParm *P)                      
#endif                                                            
{
    FILE *fp;
 
    if (!(fp = fopen("NFYEXIT.OUT", "a")))
        return(1);
 
    switch(P->Event) {
    case NMEventInitialize :   
        fprintf(fp, "exit called @ mload init.\n");
        fprintf(fp, "Version: %s\n", P->Vals.Initialize.VersionId);
        fprintf(fp, "Utility: %s\n", P->Vals.Initialize.UtilityName);
        fprintf(fp, "User: %s\n", P->Vals.Initialize.UserName);
        if (P->Vals.Initialize.UserStringLen)
           fprintf(fp, "UserString: %s\n", P->Vals.Initialize.UserString);
        break;
    case NMEventFileInmodOpen:
        fprintf(fp, "exit called @ file open: import[%d]: %s\n",
                P->Vals.FileInmodOpen.ImportNo,
                P->Vals.FileInmodOpen.FileOrInmodName);
        break;                                                   
    case NMEventPhaseIBegin :
        fprintf(fp, "exit called @ acquistion start: Database Name : %s.\n",
            P->Vals.PhaseIBegin.DBaseName);      /* Improvement SA-5394 */

        fprintf(fp, "exit called @ acquistion start: tablename[%d] : %s.\n",
            P->Vals.PhaseIBegin.TableNo,
            P->Vals.PhaseIBegin.TableName);
        break;
    case NMEventCheckPoint :
        fprintf(fp, "exit called @ checkpoint : %d records loaded.\n",
                P->Vals.CheckPoint.RecordCount);
        break;
    case NMEventPhaseIEnd :
        fprintf(fp, "exit called @ acquistion end.\n");
        fprintf(fp, "Records Read: %d\n", P->Vals.PhaseIEnd.RecsRead);
        fprintf(fp, "Records Skipped: %d\n", P->Vals.PhaseIEnd.RecsSkipped);
        fprintf(fp, "Records Rejected: %d\n", P->Vals.PhaseIEnd.RecsRejected);
        fprintf(fp, "Records Sent: %d\n", P->Vals.PhaseIEnd.RecsSent);
        break;
    case NMEventPhaseIIBegin :
        fprintf(fp, "exit called @ application start\n");
        break;
    case NMEventPhaseIIEnd :
        fprintf(fp, "exit called @ application complete for table %d.\n",
                P->Vals.PhaseIIEnd.TableNo);
        fprintf(fp, "%d updates, %d inserts, %d deletes\n",
                P->Vals.PhaseIIEnd.Updates,
                P->Vals.PhaseIIEnd.Inserts,
                P->Vals.PhaseIIEnd.Deletes);
        break;
    case NMEventErrorTableI :
        fprintf(fp, 
               "exit called @ ET Table[%d] Drop : %d records in table.\n",
                P->Vals.ErrorTableI.TableNo, P->Vals.ErrorTableI.Rows);
        break;
    case NMEventErrorTableII :
        fprintf(fp, 
               "exit called @ UV Table[%d] Drop : %d records in table.\n",
                P->Vals.ErrorTableII.TableNo, P->Vals.ErrorTableII.Rows);
        break;
    case NMEventDBSRestart :
        fprintf(fp, "exit called @ RDBMS restarted\n");
        break;
    case NMEventCLIError :
        fprintf(fp, "exit called @ CLI error %d\n",
                P->Vals.CLIError.ErrorCode);
        break;
    case NMEventDBSError :
        fprintf(fp, "exit called @ DBS error %d\n",
                P->Vals.DBSError.ErrorCode);
        break;
    case NMEventExit :
        fprintf(fp, "exit called @ mload notify out of scope: return code %d.\n",
                P->Vals.Exit.ReturnCode);
        break;
    case NMEventAmpsDown :
        fprintf(fp, "exit called @ down amps have been detected\n");
        break;
    case NMEventImportBegin :
        fprintf(fp, "exit called @ import %d starting\n",
                P->Vals.ImportBegin.ImportNo);
        break;
    case NMEventImportEnd :
        fprintf(fp, "exit called @ import %d ending.\n",
                     P->Vals.ImportEnd.ImportNo);
        fprintf(fp, "Records Read: %d\n", P->Vals.ImportEnd.RecsRead);
        fprintf(fp, "Records Skipped: %d\n", P->Vals.ImportEnd.RecsSkipped);
        fprintf(fp, "Records Rejected: %d\n", P->Vals.ImportEnd.RecsRejected);
        fprintf(fp, "Records Sent: %d\n", P->Vals.ImportEnd.RecsSent);
        break;
    case NMEventDeleteInit :                            
        fprintf(fp, "exit called @ mload delete init.\n");
        fprintf(fp, "Version: %s\n", P->Vals.DeleteInit.VersionId);
        fprintf(fp, "Utility: %s\n", P->Vals.DeleteInit.UtilityName);
        fprintf(fp, "User: %s\n", P->Vals.DeleteInit.UserName);
        if (P->Vals.DeleteInit.UserStringLen)
           fprintf(fp, "UserString: %s\n", P->Vals.DeleteInit.UserString);
        break;
    case NMEventDeleteBegin :
        fprintf(fp, "exit called @ delete app start: Databasename : %s.\n",
                P->Vals.DeleteBegin.DBaseName);               
        fprintf(fp, "exit called @ delete app start for table[%d]: %s.\n",
                P->Vals.DeleteBegin.TableNo, P->Vals.DeleteBegin.TableName);
        break;
    case NMEventDeleteEnd :
        fprintf(fp, "exit called @ delete app done for table[%d]: %d rows.\n",
                P->Vals.DeleteEnd.TableNo, P->Vals.DeleteEnd.Deletes);
        break;
    case NMEventDeleteExit :
        fprintf(fp, "exit called @ mload delete notify out of scope: return code %d.\n",
                P->Vals.DeleteExit.ReturnCode);
        break;
    case NMEventInitializeEON :
        fprintf(fp, "exit called @ mload init.\n");
        fprintf(fp, "Version: %s\n", P->Vals.InitializeEON.VersionId);
        fprintf(fp, "Utility: %s\n", P->Vals.InitializeEON.UtilityName);
        fprintf(fp, "User: %s\n", P->Vals.InitializeEON.UserName);
        if (P->Vals.InitializeEON.UserStringLen)
           fprintf(fp, "UserString: %s\n", P->Vals.InitializeEON.UserString);
        break;
    case NMEventPhaseIBeginEON :
        fprintf(fp, "exit called @ acquistion start: Databasename : %s.\n",
            P->Vals.PhaseIBeginEON.DBaseName);
        fprintf(fp, "exit called @ acquistion start: tablename[%d] : %s.\n",
            P->Vals.PhaseIBeginEON.TableNo,
            P->Vals.PhaseIBeginEON.TableName);
        break;
    case NMEventCheckPoint64 :
        fprintf(fp, "exit called @ checkpoint : %s records loaded.\n",
                P->Vals.CheckPoint64.RecordCount);
        break;
    case NMEventPhaseIEnd64 :
        fprintf(fp, "exit called @ acquistion end.\n");
        fprintf(fp, "Records Read: %s\n", P->Vals.PhaseIEnd64.RecsRead);
        fprintf(fp, "Records Skipped: %s\n", P->Vals.PhaseIEnd64.RecsSkipped);
        fprintf(fp, "Records Rejected: %s\n", P->Vals.PhaseIEnd64.RecsRejected);
        fprintf(fp, "Records Sent: %s\n", P->Vals.PhaseIEnd64.RecsSent);
        break;
    case NMEventPhaseIIEnd64 :
        fprintf(fp, "exit called @ application complete for table %d.\n",
                P->Vals.PhaseIIEnd64.TableNo);
        fprintf(fp, "%s updates, %s inserts, %s deletes\n",
                P->Vals.PhaseIIEnd64.Updates,
                P->Vals.PhaseIIEnd64.Inserts,
                P->Vals.PhaseIIEnd64.Deletes);
        break;
    case NMEventErrorTableI64 :
        fprintf(fp, 
               "exit called @ ET Table[%d] Drop : %s records in table.\n",
                P->Vals.ErrorTableI64.TableNo, P->Vals.ErrorTableI64.Rows);
        break;
    case NMEventErrorTableII64 :
        fprintf(fp, 
               "exit called @ UV Table[%d] Drop : %s records in table.\n",
                P->Vals.ErrorTableII64.TableNo, P->Vals.ErrorTableII64.Rows);
        break;
    case NMEventImportEnd64 :
        fprintf(fp, "exit called @ import %d ending.\n",
                     P->Vals.ImportEnd64.ImportNo);
        fprintf(fp, "Records Read: %s\n", P->Vals.ImportEnd64.RecsRead);
        fprintf(fp, "Records Skipped: %s\n", P->Vals.ImportEnd64.RecsSkipped);
        fprintf(fp, "Records Rejected: %s\n", P->Vals.ImportEnd64.RecsRejected);
        fprintf(fp, "Records Sent: %s\n", P->Vals.ImportEnd64.RecsSent);
        break;
    case NMEventDeleteInitEON : 
        fprintf(fp, "exit called @ mload delete init.\n");
        fprintf(fp, "Version: %s\n", P->Vals.DeleteInitEON.VersionId);
        fprintf(fp, "Utility: %s\n", P->Vals.DeleteInitEON.UtilityName);
        fprintf(fp, "User: %s\n", P->Vals.DeleteInitEON.UserName);
        if (P->Vals.DeleteInitEON.UserStringLen)
           fprintf(fp, "UserString: %s\n", P->Vals.DeleteInitEON.UserString);
        break;
    case NMEventDeleteBeginEON :
        fprintf(fp, "exit called @ delete app start: Databasename : %s.\n",
                P->Vals.DeleteBeginEON.DBaseName);                 
        fprintf(fp, "exit called @ delete app start for table[%d]: %s.\n",
                P->Vals.DeleteBeginEON.TableNo, P->Vals.DeleteBeginEON.TableName);
        break;
    case NMEventDeleteEnd64 :
        fprintf(fp, "exit called @ delete app done for table[%d]: %s rows.\n",
                P->Vals.DeleteEnd64.TableNo, P->Vals.DeleteEnd64.Deletes);
        break;
    }
    fclose(fp);
    return(0);
}
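
To use this notify exit, compile it into a shared library and name it in the utility's NOTIFY clause; the routine appends its event log to NFYEXIT.OUT in the job's working directory. The lines below are a minimal sketch: the source and library file names are hypothetical, and the exact NOTIFY syntax varies by platform and TTU release, so verify it against the MultiLoad reference.

    # Build the exit as a shared library on Linux (file names illustrative)
    gcc -shared -fPIC -o libnfyexit.so nfyexit.c

    # In the MultiLoad script, request notifications at the desired level,
    # for example:
    #   .BEGIN MLOAD TABLES mydb.mytable
    #       NOTIFY HIGH EXIT libnfyexit.so TEXT 'nightly load';

The TEXT string, if supplied, is what arrives in the UserString and UserStringLen fields of the Initialize events handled above.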

Teradata Connector for Hadoop 1.0.6 now available

Short teaser: 
Teradata Connector for Hadoop: High-performance bi-directional data movement between TD and Hadoop.
Cover Image: 

The Teradata Connector for Hadoop (TDCH) provides scalable, high-performance, bi-directional data movement between Teradata database systems and Hadoop systems.

Overview

There are currently three editions of TDCH:

  1. Teradata Connector for Hadoop (Command Line Edition)
    • End-user tool with its own CLI (Command Line Interface).
    • Designed and implemented for the Hadoop user audience. 
  2. Teradata Connector for Hadoop (Studio Edition)
  3. Teradata Connector for Hadoop (Sqoop Integration Edition)
    • A building block that enables integration as part of a 3rd party end-user tool such as Sqoop. When used as a building block, TDCH (Sqoop Integration Edition) comes with a Java API (Application Programming Interface).
    • Designed and implemented for the Hadoop user audience.
    • Sqoop users can use the Sqoop command line interface for bi-directional data movement between Teradata and Hadoop (a sketch of such a command follows this list).
    • Various Hadoop distributions such as Hortonworks and Cloudera use the Java API provided by TDCH (Sqoop Integration Edition) to integrate with Sqoop. The Sqoop integrated products, which provide bi-directional data movement between Teradata and Hadoop, are distributed by the Hadoop vendors.
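
As a concrete illustration of the Sqoop route, the hedged sketch below moves a Teradata table into HDFS with a plain Sqoop command. The host, credentials, table, and target directory are hypothetical placeholders, and the connection options your distribution's Sqoop integration expects may differ, so check its documentation:

    sqoop import \
      --connect jdbc:teradata://tdhost/DATABASE=sales \
      --username dbc --password dbc \
      --table daily_orders \
      --target-dir /user/hive/staging/daily_orders \
      --num-mappers 4

A corresponding sqoop export moves data in the other direction, from HDFS back into a Teradata table.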

Need Help? 

For more detailed information on the Teradata Connector for Hadoop, please see the attached Tutorial document as well as the README file in the appropriate TDCH download packages.

For more information about Hadoop Product Management (PM), Teradata employees can go to Teradata Connections Hadoop PM.


TTU Install Documentation

Cover Image: 

The TTU Install Documentation explains how to install Teradata Tools and Utilities software from delivery media, configure the applications when required, and verify installation. Additionally, instructions for deploying software with SMS, SCCM, and TAR are provided. The documentation is available as HTML for all platforms and use cases (complete installation, deployment, and installation of specific applications). Links to PDF versions of the documents are also provided.

 

TTU 14.10 Documentation for Windows

Install all Teradata Tools and Utilities applications from DVD to:
  • Windows Operating System

Deploy applications with:
  • SCCM
  • TAR

TTU 14.10 Documentation for Linux

Install all Teradata Tools and Utilities applications from DVD to:
  • Red Hat Enterprise Linux
  • SUSE Linux
  • IBM s390x Linux

Deploy applications with:
  • TAR

TTU 14.10 Documentation for UNIX

Install all Teradata Tools and Utilities applications from DVD to:
  • Oracle Solaris on AMD Opteron
  • Oracle Solaris on SPARC
  • HP-UX
  • IBM AIX

Deploy applications with:
  • TAR

TTU 14.10 Documentation for z/OS

Install all Teradata Tools and Utilities applications from DVD to:
  • IBM z/OS

 

TTU 14.00 Documentation for Windows

Install all Teradata Tools and Utilities applications from DVD to:
  • Windows Operating System

Deploy applications with:
  • SMS
  • TAR

TTU 14.00 Documentation for Linux

Install all Teradata Tools and Utilities applications from DVD to:
  • Red Hat Enterprise Linux
  • SUSE Linux
  • IBM s390x Linux

Deploy applications with:
  • TAR

TTU 14.00 Documentation for UNIX

Install all Teradata Tools and Utilities applications from DVD to:
  • Oracle Solaris on AMD Opteron
  • Oracle Solaris on SPARC
  • HP-UX
  • IBM AIX

Deploy applications with:
  • TAR

TTU 14.00 Documentation for z/OS

Install all Teradata Tools and Utilities applications from DVD to:
  • IBM z/OS

Teradata Viewpoint 14.10 Release

Short teaser: 
This article is the official release announcement of Teradata Viewpoint 14.10
Cover Image: 

This article is the official release announcement of Teradata Viewpoint 14.10, with an effective release date of May 6, 2013. With new enhancements in the Alerting, Workload Management, and Monitoring areas, Viewpoint 14.10 continues to expand its scope and adds the ability to monitor Hadoop systems alongside Aster and Teradata systems.

Summary

The primary themes of the Viewpoint 14.10 release are providing a front end and visualization for new Teradata Database 14.10 features, and Hadoop system monitoring. There are enhancements in the Alerting, Monitoring, and Management areas. The highlights of Viewpoint 14.10 are:

  1. Stats Manager
  2. Hadoop System Monitoring
  3. Workload Management enhancements (group throttles, new classifications, ability to unlock rulesets, etc.)
  4. Reports in Query Monitor portlet
  5. Alerting Enhancement

Browser support has also been updated to include Firefox 18, Chrome 24, Safari 5.1, and IE 8.x and 9.x.

Stats Manager

The Stats Manager portlet complements the Auto Stats feature of Teradata Database 14.10 and works with release 14.10 and later. Stats Manager allows DBAs and users to efficiently manage their statistics collection process. It is a new tool option in the Add Content | Tools menu.

Before we go into the details of this new feature, let's discuss why it is needed. Accurate cardinality and cost estimates help the Teradata Optimizer choose an optimal plan, and statistics are what provide cardinality information to the Optimizer. Cardinality can change significantly with bulk load jobs, making statistics stale and inaccurate. Sometimes it is challenging even for an experienced DBA to judge which object statistics would be beneficial, which can result in collecting unnecessary statistics or missing critical ones. Collect stats jobs are usually resource intensive because they contain many collect stats statements, so knowing what is needed and what is not saves CPU cycles. Scheduling constraints may also leave too little time to complete a collect stats job, creating a need to prioritize important or stale statistics first. The Stats Manager tool simplifies these tasks and helps users automate the statistics collection process. The Stats Manager portlet can be used to:

  • View statistics on a system
  • Schedule statistic collection jobs
  • Identify missing stats
  • Detect and refresh stale statistics
  • Identify and discontinue collecting unused statistics
  • View when statistics were last collected and are scheduled for collection again
  • Set the priority of a collect stats statement relative to other collect stats statements
  • View the CPU utilization of collect stats jobs, allowing the user to determine whether a particular job consumes more CPU than anticipated

There are two main tabs in Stats Manager: Statistics and Jobs.

Statistics Tab

The Statistics tab shows all objects (e.g., databases and tables) on the system that have at least one statistic or at least one outstanding recommendation. The user can drill down in the data grid to navigate between databases, tables, and columns. Figure 1 is an example of the Statistics by Database view.

Figure 1

The Actions menu has three options:

  • Automate enables statistics to be collected by collect jobs.
  • Deautomate stops statistics from being collected by collect jobs.
  • Edit Collect Settings allows the user to edit thresholds, sampling, and histogram settings.

The information bar displays the percentage of statistics approved for automation, allowing the user to determine whether more statistics need to be approved, and the percentage of automated statistics that have collect jobs, allowing the user to determine whether additional collect jobs are needed. Recommendations displays the list of recommendations produced by an analyze job; by clicking the link, the user can approve or reject them. The statistics table displays all objects with at least one statistic, or with at least one recommendation that has not yet been approved or rejected; the table is configured using Configure Columns from the Table Actions menu. From this tab the user can automate any object for the statistics collection process, which approves its statistics for collection by collect jobs. The user can also view statistics detail reports by drilling down to a statistics object; see Figure 2.

Figure 2

Jobs Tab

The Jobs tab displays the list of user-defined collect and analyze job definitions. From this view, the user can create collect and analyze jobs, manage existing jobs, and review job reports. Figure 3 shows the top-level Jobs tab layout. The Actions menu has three options: New Collect Job enables the user to define a job that collects statistics, New Analyze Job enables the user to define a job that evaluates statistics use and makes recommendations, and View History lists the run status and reports for collect and analyze jobs over time.

Figure 3

The job definitions table displays summary information about jobs; drilling down shows the details. The job schedule displays a nine-day view of jobs that are running, scheduled to run, or have already run; mousing over a date shows the list of jobs for that day.

A collect job generates and submits COLLECT STATISTICS statements to the Teradata Database for the objects that were approved for automation in the Statistics tab. The user can assign a priority to individual COLLECT STATISTICS statements; see Figure 4 and the sketch that follows it.

Figure 4
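
For illustration, the statements such a job submits are ordinary Teradata DDL of the following form. The database, table, and column names here are hypothetical placeholders:

    COLLECT STATISTICS COLUMN (o_orderdate) ON Sales.Orders;
    COLLECT STATISTICS COLUMN (o_custkey) ON Sales.Orders;

Stats Manager decides when each such statement runs and at what priority, so the user does not have to hand-maintain scripts of these statements.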

The user can schedule a job to run for a limited time and then define a new schedule that resumes the job at a different time of day; see Figure 5.

Figure 5

An analyze job allows the user to evaluate statistics status and get statistics-related recommendations. Analyzing objects enables the user to determine where additional statistics might be useful and to identify existing statistics that are used frequently or are stale. Once the recommendations are generated, the user can review them and automate the objects for the statistics collection process in the Statistics tab. See Figure 6 for the various functions that an analyze job can perform.

Figure 6

The Viewpoint Log Table Clean Up feature can be used to clean up the job results stored in the TDStats database on the Teradata Database system.

Hadoop System Monitoring

Teradata Viewpoint 14.10 supports Hadoop system monitoring for the Hortonworks-provided Hadoop solution packaged as part of the Aster Big Analytics Appliance 3. A new Hadoop Services portlet allows users to monitor the status of the various services running on Hadoop systems. Using the expandable service views for MapReduce, HDFS, and HBase, users can view key metric details for the selected services (see Figure 7).

Figure 7

The Aster Node Monitor portlet has been renamed the Node Monitor portlet and now monitors both Aster and Hadoop systems. Using the Node Monitor portlet with Hadoop systems, users can view node-level metrics, the available Hadoop services, and the status of those services on each node of the system. Users can also view hardware statistics such as CPU usage, memory usage, and network activity. Navigating through the Hadoop system topology, users can also view detailed service component and JVM metrics for the HDFS and MapReduce services (see Figure 8).

Figure 8

As with Aster system monitoring, Hadoop system monitoring was integrated with the existing portlets. The usability and the look and feel of the portlets were maintained, but the underlying data and metrics correspond to the monitored system, in this case Hadoop. The following existing portlets were modified to support Hadoop system monitoring:

  • Alert Viewer – View all the alerts logged for Hadoop systems.
  • Capacity Heatmap – Displays trends for key metric usage related to the system, HDFS, and MapReduce.
  • Metrics Analysis – Displays and compares trends for key metric usage related to the system, HDFS, and MapReduce in a graphical format across different Hadoop systems.
  • Metrics Graph – Displays trends for key metric usage related to the system, HDFS, and MapReduce in a graphical format.
  • Space Usage – Monitors space usage on a node, such as total space, current space, percent in use, and available space.
  • Admin – Provides the ability to add Hadoop systems and define alerts for them.
  • System Health – Hadoop systems are identified by an "H" in the system's icon, and drilling down shows all the key metrics for the Hadoop system. See Figure 9.

Figure 9

Reports in Query Monitor

In Viewpoint 14.10 we added three new reports to Query Monitor.

  1. Multi-session report: A new By Utility | By Job option in Query Monitor displays all running utility jobs, with drill-down to the individual sessions logged on by a particular utility job and further drill-down to session details. (See Figure 10)
  2. Hot AMP report: A new By Vproc | By Skewed AMP option displays the AMPs with the most skewed sessions that exceed the CPU skew threshold set in the PREFERENCES view. (See Figure 10)
  3. By PE report: A new By Vproc | By PE option displays the total number of sessions logged on to each PE and the CPU value for the PE. (See Figure 10)

Figure 10

Teradata Workload Management enhancements

Teradata Viewpoint 14.10 introduces group throttles, which let a user define a throttle on a group of workloads. New classification criteria were also added for UDFs, UDMs, memory usage, and collect stats statements; these features depend on Teradata Database 14.10. In Teradata Viewpoint 14.10, users can now unlock any ruleset, provided they have the appropriate permissions. Users can also model a system ruleset, which is useful for comparing workload management features across platforms (Appliance vs. EDW) or across versions of Teradata.

Alerting Enhancement

Various new alert options and alert types were added in this release of Viewpoint.

  • An option to send an alert for Teradata Database restart was added.
  • An option to include or exclude users was added to session alerts. A user who wants to define a session alert for a small set of users can simply list them with the include option rather than adding everyone else to the exclude list. Both lists also support the splat (*) wildcard. (See Figure 11)

Figure 11

  • Users can now send an alert for long-running sessions using the newly added Active time option in the Session alert type.
  • A Spool space (MB) option was added to session alerts to send an alert if a session uses more spool space than anticipated.
  • A Delta I/O (logical I/Os) option was added to send an alert for a session consuming excessive logical I/O during the last collection interval.
  • In the Database Space alert type, users can now specify thresholds for Current Spool Space (%) and Peak Spool Space (%) to send an alert when either exceeds its threshold. Splat wildcard support was added to the Database Space include/exclude user list.
  • A new Table Space alert type was added late in the Viewpoint 14.01 release, with an alert option on the DBC.TransientJournal table and the ability to specify current perm and skew thresholds.

Lock Logger

In Viewpoint 14.10 we modified the Lock Logger architecture for Teradata Database 14.10 and later. When Viewpoint 14.10 is used with Teradata Database 14.10 or later, the Lock Info collector captures lock information from the data written to the DBQL lock log table; therefore, DBQL query logging must be enabled with the WITH LOCK option.
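
A minimal sketch of enabling this in DBQL follows; the threshold value and the exact semantics of the LOCK option are assumptions here, so confirm them against the Teradata Database 14.10 DBQL documentation:

    /* Log lock information for all users; LOCK=5 is a placeholder threshold */
    BEGIN QUERY LOGGING WITH LOCK=5 ON ALL;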

Finally, please refer to the associated Viewpoint Configuration Guide for details of the upgrade process, and to the User Guide for details of the new features.

This is another substantial release, with features across a number of strategic areas. We hope you take advantage of the new additions and improvements in Teradata Viewpoint 14.10, and we always look forward to your thoughts and comments.
