Teradata


A logtable restart expects the script to be in the same state as the failed run.

Is there a general list of error codes and what they mean?


currently, I am getting the following error:
12:32:59 UTY1005 Script file has been altered at line 3 prior to restart.
12:32:59 UTY6215 The restart log table has NOT been dropped.
12:32:59 UTY2410 Total processor time used = '0.015625 Seconds'
See the Messages manual for explanations.

You should have a message a few lines earlier that indicates some prior
execution of MLOAD against this table was started but not completed (nor
cleaned up) so MLOAD thinks this should be a restart. But the script you
are executing isn't the same as the one that failed, so UTY1005 says it
can't autorestart either. See the MultiLoad manual sections on
Terminating / Restarting jobs.

As noted above, any change to the script between the failed run and the restart (while the restart log table still exists) results in the UTY1005 "script altered" error.

Backup and Restore only statistics in Teradata


Is there a way to restore old statistics, or copy only statistics from table A to a copy of table A?
You can restore old stats if you copy them to a copy of the table (with or without data).
collect stats on database.table_a_copy from database.table_a;
to restore: collect stats on database.table_a from database.table_a_copy;
I'm not sure why you would want to do this because if you restore a table using ARC, stats are also
restored and if you create a copy of the table you can specify 'with stats'.

Pivot based on date


Good day all,
I have data that consists of 3 months of charges for each line item.

Bill_No  Bill_Month             Charge
00121    4/10/2015 12:00:00 AM  1300.00
00121    5/10/2015 12:00:00 AM  1300.00
00121    6/10/2015 12:00:00 AM  1400.00

I would like to pivot the data so that my result would be as such:

Bill_No  April  May   June
00121    1300   1300  1400

Any assistance or suggestions would be appreciated.


I do understand that I will need to use cast((cast(Bill_Month as format 'mmm')) as char(3)) as
Bill_Month to do the conversion for that part.
I am guessing that CASE may be needed, perhaps with RANK, for the actual pivoting, but I am just not
sure.
Thanks for looking

select BILL_NO,
  max(case when Bill_Month = 'Apr' then MNTH_MRC end) as April,
  max(case when Bill_Month = 'May' then MNTH_MRC end) as May,
  max(case when Bill_Month = 'Jun' then MNTH_MRC end) as June,
  max(case when Bill_Month = 'Jul' then MNTH_MRC end) as July

select BILL_NO, CIRCUIT_NO, CYCLE_NO,
  max(case when Bill_Month = cast((cast(ADD_MONTHS(CURRENT_DATE, -3) as format 'mmm')) as char(3))
      then MNTH_MRC end) as "3rd_Mnth",
  max(case when Bill_Month = cast((cast(ADD_MONTHS(CURRENT_DATE, -2) as format 'mmm')) as char(3))
      then MNTH_MRC end) as "2nd_Mnth",
  max(case when Bill_Month = cast((cast(ADD_MONTHS(CURRENT_DATE, -1) as format 'mmm')) as char(3))
      then MNTH_MRC end) as "1st_Mnth"
from
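The max-of-CASE pattern used in both queries above is plain conditional aggregation: one output column per month, filled only when the row's month matches. An illustrative Python sketch of the same pivot logic, using the column names from the example data (not a real schema):

```python
# Pivot (bill_no, month, charge) rows into one row per bill_no with one
# column per month -- the same effect as max(case when ... end) per month.
from collections import defaultdict

rows = [
    ("00121", "Apr", 1300.00),
    ("00121", "May", 1300.00),
    ("00121", "Jun", 1400.00),
]

months = ["Apr", "May", "Jun", "Jul"]
pivot = defaultdict(dict)
for bill_no, month, charge in rows:
    # max() mirrors the SQL aggregate; the value stays absent (None via
    # .get) for months that never occur, like the NULL the CASE produces.
    prev = pivot[bill_no].get(month)
    pivot[bill_no][month] = charge if prev is None else max(prev, charge)

for bill_no, by_month in pivot.items():
    print(bill_no, [by_month.get(m) for m in months])
```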

Need this tricky logic .. Urgent help


Hi.. Here is a tricky scenario am trying to solve. Please help me out.
Table A1: this table keeps updating once every 2 days.
id,name,fav_color,date
2051,joe,white,07/21
2052,John,green,07/21
After 2 days records are updated and table A1 looks like this.
id,name,fav_color,date
2051,joe,blue,07/23
2052,Rick,green,07/23
Table A2: this is a history table that captures all the changes being done to table A1.
id,updated_column,update_dt,old_value,new_value
2051,fav_color,07/23,white,blue
2052,Name,07/23,John,Rick
Now business wants to see a monthly snapshot of table A1 by the end of the month.
basically I want a monthly snapshot like this.
id,name,fav_color,date
2051,joe,white,07/21

2051,joe,blue,07/23
2052,John,green,07/21
2052,Rick,green,07/23
Please tell me how to achieve this.
I have very little time to figure this out, so I am posting here :). Thank you all.

Hi,
We can use UNION ALL, as below:
select
id,
name,
fav_color,
date
From Table_A1
union all
select
id,
updated_column,
old_value,
update_dt
from table_a2
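The UNION ALL above pulls the old values through, but to reconstruct the prior version of a row, the history record's old_value has to land in the column named by updated_column. An illustrative Python sketch of that reconstruction (column names are from the example; the prior load date is assumed, not derived):

```python
# Rebuild the pre-change version of each current row by writing the
# history record's old_value back into the column it names.
current = {
    2051: {"id": 2051, "name": "joe",  "fav_color": "blue",  "date": "07/23"},
    2052: {"id": 2052, "name": "Rick", "fav_color": "green", "date": "07/23"},
}
history = [
    {"id": 2051, "updated_column": "fav_color", "update_dt": "07/23",
     "old_value": "white", "new_value": "blue"},
    {"id": 2052, "updated_column": "Name", "update_dt": "07/23",
     "old_value": "John", "new_value": "Rick"},
]

snapshot = []
for h in history:
    old_row = dict(current[h["id"]])
    old_row[h["updated_column"].lower()] = h["old_value"]  # undo the change
    old_row["date"] = "07/21"   # assumption: the prior load date
    snapshot.append(old_row)
snapshot += list(current.values())       # plus the current versions
snapshot.sort(key=lambda r: (r["id"], r["date"]))
for r in snapshot:
    print(r)
```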

How to join on nearest lower value in Teradata SQL


I have two tables in Teradata.

table1
1
2
3
5

table2
2
3
4
6

Expected output (each table2 value with its matching or nearest lower table1 value):

table2  table1
2       2
3       3
4       3
6       5

If a matching value is found, the column should join on the matching value; otherwise on the nearest lower value.

select t2.*, t1.*


from table2 as t2
join table1 as t1
on t1.i = (select max(t3.i) from table1 as t3 where t3.i <=t2.i)

But this results in a product join, which might be OK if you have another join condition.
I prefer following approach:
UNION both columns, find the last value using an OLAP function and then join back to both tables:


select v2,
max(v1)
over (order by coalesce(v1,v2), v2
rows unbounded preceding) as newV1
from
(
select i as v1, null as v2 from table1
union
select null as v1, i as v2 from table2
) as dt
qualify v2 is not null

Put this in a Derived Table and join back:


select t1.*, t2.*
from table1 as t1
join (previous_query) as dt
  on t1.i = dt.newv1
join table2 as t2
  on dt.v2 = t2.i

Depending on the actual data this might be much more efficient...
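The "nearest lower value" lookup that both SQL approaches compute can be mimicked procedurally with a binary search. An illustrative Python sketch using the example values (not Teradata code):

```python
# For each table2 value, find the greatest table1 value <= it -- the same
# result as the correlated max() subquery or the OLAP approach above.
import bisect

table1 = [1, 2, 3, 5]            # must be sorted for bisect to work
table2 = [2, 3, 4, 6]

pairs = []
for v2 in table2:
    # bisect_right returns the insertion point; the element just before
    # it is the largest table1 value that is <= v2.
    idx = bisect.bisect_right(table1, v2) - 1
    if idx >= 0:                 # no match if v2 is below every table1 value
        pairs.append((v2, table1[idx]))

print(pairs)
```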

What is the Criteria to choose best Primary Index ?


Be careful when choosing the primary index, because it affects data storage
and performance.

The following are important tips for choosing a primary index.

1. Data distribution.
Analyze the number of distinct values in the table. A primary index column with
few null values and many distinct values distributes rows evenly and therefore
gives better performance.

2. Access frequency.
The column should be frequently used in the WHERE clause during row selection,
and frequently used in joins.

3. Volatility.
The column's values should not change frequently.
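The data-distribution point can be illustrated with a toy hashing sketch: rows are assigned to AMPs by hashing the primary-index value, so a low-cardinality column piles rows onto a few AMPs while a high-cardinality column spreads them evenly. This Python sketch is only an analogy; it is not Teradata's actual hash function:

```python
# Toy illustration: spread 1000 rows over 10 "AMPs" by hashing the
# primary-index value; compare low- vs high-cardinality PI candidates.
from collections import Counter

NUM_AMPS = 10

def distribute(values):
    amps = Counter()
    for v in values:
        amps[hash(v) % NUM_AMPS] += 1    # stand-in for the rowhash
    return amps

low_card = ["M" if i % 2 else "F" for i in range(1000)]   # 2 distinct values
high_card = [f"cust-{i}" for i in range(1000)]            # 1000 distinct values

low = distribute(low_card)
high = distribute(high_card)
print("low cardinality :", sorted(low.values(), reverse=True))
print("high cardinality:", sorted(high.values(), reverse=True))
```

With only 2 distinct values, all 1000 rows land on at most 2 of the 10 buckets; the high-cardinality column spreads them roughly evenly, which is exactly the skew argument above.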

Running in loop after every certain time

.logon ip/username,password

sel count(*) from dbc.tables;
.hang 100
= 1;
.logoff
.quit

Query and performance tuning is not something you can get off the shelf; it depends on the query.
But since you asked, here are some basic query tuning steps. Keep in mind that over-collection of
statistics, or unnecessary stats collection, will have a negative impact.
Pick the queries that are worth tuning: for instance, queries with high CPU, I/O and spool
consumption, or that are skewed (PJI and UII greater than 3). Also check whether these queries are run frequently.
Checkpoints:
1. Having stale and obsolete statistics is much worse than having no stats.

2. Refresh your stats and make sure you have stats collected on the WHERE and ON criteria. Make sure
you have stats collected on your NUSIs.
3. Check the queries for a missed join condition.
4. Check for UNION operations that could be UNION ALL, and for a redundant DISTINCT on top.
5. Assuming you have done all this and your query is still performing poorly, I would
recommend checking your explain plan and looking for key indicators of poor performance:
redistribution, "no confidence", product joins, updates of primary index columns (I've
seen sites doing this; data redistribution is costly in an update operation).
6. If your explain plan shows redistribution, make sure you have a proper
join condition in place; there may be implicit data type conversion in the joins, you might have missed a join criterion,
or you are not joining on a PI or at least a proper column.
7. If you have a runaway query chewing up resources, make sure your join criteria are at least the
same data type, with stats in place. Make sure you have an equi-join rather than a Cartesian product, and try to
keep the query simple rather than piling on constraint criteria (predicates), which can throw off the
estimation of row retrieval.
I only listed a few; there are many ways of looking at tuning. You can implement global or
join indexes, hash indexes, etc. But one common starting point is your explain plan. Hope this helps
in your performance tuning efforts.
Why is this query taking such a long time? If I remove the qualify Row_number() Over
(Order by SLSTY) between 1000 and 2000 statement and replace it with Order by SLSTY,
it runs pretty fast (less than 10 secs).
Select
Business,
Item_no,
Brand,
Vendor,
Sum(sls_ty) slsty,
Sum(sls_ly),
Sum(sls_reg_ty)
from database_name1.IP_Table_Name
Group by
Business,
Item_no,
Brand,
Vendor
qualify Row_number() Over (Order by SLSTY) between 0 and 1000

Error Code 6706: The string contains an untranslatable character.


Greetings, running this query I get the error message "Error Code 6706: The string contains an
untranslatable character". I find it very strange and really had not seen it before; if you could give an
indication of how to find the solution, I would be grateful.

There is some implicit character conversion going on.


Most likely the character set of your source table differs from your target table.
If your source table contains Unicode character fields that also contain real Unicode values, and your
target table field is defined with the LATIN character set, you will get this error code.
Check the TRANSLATE (with the WITH ERROR option) and TRANSLATE_CHK functions.
I get this error when I CAST an Integer/Decimal/Float field to VARCHAR and concatenate to VARCHAR
field in a SELECT clause. e.g.
SELECT CAST(COALESCE(A,"") AS VARCHAR(11)) || '|' || TRIM(B) FROM <DB>.<TB>
gives 6706 error (Untranslatable character)
However,
SELECT CAST(A AS VARCHAR(11)), TRIM(B) FROM <DB>.<TB>
works fine.
So, I am not sure if it is a Unicode/Latin issue.

It's likely to be that; check Dieter's reply on http://forums.teradata.com/forum/database/implicit-data-type-conversion-tochar-ends-up-in-unicode


You can validate by:

create volatile table sql_test as (
SELECT CAST(COALESCE(A,"") AS VARCHAR(11)) || '|' || TRIM(B) as test FROM <DB>.<TB>
) with no data
no primary index
on commit preserve rows;

show table sql_test;

I guess test will be Unicode.

SELECT COALESCE(cast(A as varchar(11)),"") || '|' || TRIM(B) as test FROM <DB>.<TB>

Unicode vs Latin
Hi,
I see that Teradata uses UNICODE for data dictionary tables or system tables and Latin for user data.
May I know the reasons and advantages of doing this?

SOUNDEX
Returns a character string that represents the Soundex code for string_expression.
The following process outlines the Soundex coding guide:

1. Retain the first letter of the name.

2. Drop all occurrences of the following letters in other positions:
A, E, I, O, U, Y, H, W

3. Assign the following numbers to the remaining letters after the first letter:
1 = B, F, P, V
2 = C, G, J, K, Q, S, X, Z
3 = D, T
4 = L
5 = M, N
6 = R

4. If two or more letters with the same code are adjacent in the original name, or adjacent except for any intervening H or
W, omit all but the first.

5. Convert to the form letter, digit, digit, digit, adding trailing zeros if there are fewer than three digits.
6. Drop the rightmost digits if there are more than three digits.
7. Names with adjacent letters having the same equivalent number are coded as one letter with a single number.
Surname prefixes are generally not used.
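The steps above can be sketched as a small function. This is an illustrative Python implementation of the classic Soundex algorithm, not Teradata's internal code:

```python
# Classic Soundex, following the coding guide above: keep the first
# letter, code the rest, collapse adjacent equal codes (H/W transparent,
# vowels break a pair), then pad or truncate to three digits.
CODES = {}
for digit, letters in {"1": "BFPV", "2": "CGJKQSXZ", "3": "DT",
                       "4": "L", "5": "MN", "6": "R"}.items():
    for ch in letters:
        CODES[ch] = digit

def soundex(name):
    name = name.upper()
    first = name[0]
    prev = CODES.get(first)       # the first letter's code counts for rule 4
    digits = []
    for ch in name[1:]:
        code = CODES.get(ch)
        if ch in "HW":
            continue              # H/W do not separate an adjacent pair
        if code is None:          # vowel (or Y): drop it, but break the pair
            prev = None
            continue
        if code != prev:
            digits.append(code)
        prev = code
    return first + "".join(digits)[:3].ljust(3, "0")

print(soundex("Robert"), soundex("Tymczak"), soundex("Pfister"))
```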

Examples of Non-Valid Usage


The following SOUNDEX examples are not valid for the reasons given below.

Statement: SELECT SOUNDEX(12345);
Why the statement is not valid: 12345 is a numeric string, not a character string.

Statement: SELECT SOUNDEX('b');
Why the statement is not valid: The characters and are not simple Latin characters.

Tab and new line symbols


Hi all,
How can I add Tab and new line symbols to a string without using a UDF?

Thanks.
Similar to this: Select '$' || x'0A' || '$';

So I guess you need to be more specific in your question.


How can I add Tab and new line symbols to a string without using a UDF?
The answer is concatenation.
What are the ASCII numbers for LF and TAB?
The answer is '0A' and '09',
or for Unicode:
'000A' and '0009'
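For reference, LF is hex 0A and TAB is hex 09 in any language, so concatenation behaves just like the x'0A' literal above. An illustrative Python check:

```python
# LF = hex 0A, TAB = hex 09; build '$<LF>$' like SELECT '$' || x'0A' || '$'.
lf = chr(0x0A)
tab = chr(0x09)

s = "$" + lf + "$"
print(repr(s), hex(ord(lf)), hex(ord(tab)))
```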

How to get ASCII Value of Character


I need the ASCII value of characters, e.g. A=65, B=66. I have tried SELECT CHAR2HEXINT('A'), which returns
the HEX value (0041) of the char passed as parameter. Does anyone have any idea how to convert HEX
(0041) to ASCII (65) using FORMAT or some other built-in function? Thanks.

Assuming we are in the ASCII range (0-255):

SELECT COL1,
((CHAR2HEXINT(COL1) (INTEGER) (NAMED HVAL))/100*16*16
 + (HVAL MOD 100)/10 * 16
 + (HVAL MOD 10)) AS ASCIIVAL
FROM MYTABLE;

I don't think the above method works for the letter J. Try the character 'J', for example: the hex '4A' can't
be converted to an integer.

It does not work between J and O (both upper and lower case) and Z/z. I think if one has to use it more
often, either create a table of HEX-to-ASCII values and do a lookup on that table; for that matter it can
be a CHAR-to-ASCII table. It is a one-time INSERT for all the ASCII values, but can be reused. Or just create
the UDF function once and for all. Anyway, I was only looking for a short-term solution at this time.
Thanks.

Here is a non-UDF solution:

select col1,
  case substring(char2hexint(col1) from 1 for 1)
    when '0' then 0 when '1' then 1 when '2' then 2 when '3' then 3
    when '4' then 4 when '5' then 5 when '6' then 6 when '7' then 7
    when '8' then 8 when '9' then 9 when 'A' then 10 when 'B' then 11
    when 'C' then 12 when 'D' then 13 when 'E' then 14 when 'F' then 15
  end * 16
  + case substring(char2hexint(col1) from 2 for 1)
    when '0' then 0 when '1' then 1 when '2' then 2 when '3' then 3
    when '4' then 4 when '5' then 5 when '6' then 6 when '7' then 7
    when '8' then 8 when '9' then 9 when 'A' then 10 when 'B' then 11
    when 'C' then 12 when 'D' then 13 when 'E' then 14 when 'F' then 15
  end as asciival
from mytable;

*** Query completed. 12 rows found. 2 columns returned.
*** Total elapsed time was 1 second.

col1  asciival
A     65
B     66
C     67
D     68
E     69
F     70
G     71
H     72
I     73
J     74
K     75
L     76

SELECT 'a' AS reqchar,
  ((SUBSTRING(CHAR2HEXINT(reqchar),3,1)) (INTEGER)) * 16
  + ((SUBSTRING(CHAR2HEXINT(reqchar),4,1)) (INTEGER)) AS asciival

If you are on Teradata >= 14.00 you can try:
SELECT ASCII('A');
->> Returns the decimal representation of the first character in str_expr as a NUMBER value.
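The hex-digit CASE ladder above is doing a base-16 conversion by hand. For comparison, an illustrative Python equivalent of the same conversion (the SQL versions are still what you need server-side before Teradata 14's ASCII function):

```python
# CHAR2HEXINT('J') -> '004A'; the CASE ladder converts '4A' from base 16.
def ascii_val(ch):
    hex4 = format(ord(ch), "04X")    # like CHAR2HEXINT for Latin chars
    return int(hex4[2:], 16)         # last two hex digits -> 0-255

print(ascii_val("A"), ascii_val("J"))
```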

UTF8TO16: why is it a Latin-to-Unicode conversion function when UTF8


itself is a Unicode encoding?
UDF: utf8to16 function:
Why is it known as a Latin-to-Unicode conversion function when UTF8 and UTF16 both represent Unicode
encodings?
UTF8TO16? There's no such function built in.
There are only TransUnicodeToUTF8 and TransUTF8ToUnicode, but those are compression UDFs
(changing internal storage from UTF16 to UTF8 to save space).
TRANSLATE(col USING LATIN_TO_UNICODE) converts from Latin to Unicode.
Hi Dieter,
Can you please help in converting data from UTF8 to LATIN1_0A or ISO8859_1?
I tried using the query below, but I am getting the error "the string contains an untranslatable character". In the
same way, with TRANSLATE(COL1 USING UTF8_TO_LATIN1_0A) AS COL1 it says unknown
character string.
Can we do this by writing a SQL query or not? If so, how? Please advise.
SEL
TRANSLATE(COL1 USING UNICODE_TO_LATIN) AS COL1
FROM
(

SEL
CASE
WHEN DB.COL1 IS NOT NULL THEN ' ' || DB.COL1
ELSE ''
END
FROM TABLEA
) DT (COL1)
If that column contains any non-Latin characters, TRANSLATE will fail; you might add WITH ERROR to
replace bad chars with an error character (hex '1A'):

TRANSLATE(COL1 USING UNICODE_TO_LATIN WITH ERROR)

There's no ISO8859_1 or LATIN1_0A character set in Teradata, only Latin and Unicode.
A session character set might be LATIN1_0A, then the Unicode data is automatically converted.
But if you got Unicode data why do you want to convert it to Latin?
Dieter
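The effect of WITH ERROR can be mimicked in Python with an encode error handler: characters with no Latin-1 mapping are substituted instead of raising. Illustrative sketch only (Teradata substitutes hex '1A'; Python's 'replace' handler uses '?'):

```python
# Unicode -> Latin-1 with substitution for untranslatable characters,
# analogous to TRANSLATE(col USING UNICODE_TO_LATIN WITH ERROR).
text = "caf\u00e9 \u20ac5"        # 'é' fits Latin-1, the euro sign does not

strict_fails = False
try:
    text.encode("latin-1")        # like TRANSLATE without WITH ERROR
except UnicodeEncodeError:
    strict_fails = True

replaced = text.encode("latin-1", errors="replace")
print(strict_fails, replaced)
```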

CREATE VOLATILE TABLE LTE AS (
SEL TRANSLATE(UPDATE_USER USING LATIN_TO_UNICODE WITH ERROR) AS LTE FROM SAGAR.RUN_DATE
) WITH NO DATA ON COMMIT PRESERVE ROWS;

SHOW TABLE LTE;

INSTR
The following query returns the result 20 indicating the position of 'ch' in 'chip'. This is the second occurrence of 'ch'
with the search starting from the second character of the source string.
SELECT INSTR('choose a chocolate chip cookie','ch',2,2);
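INSTR's semantics (1-based positions, a starting offset, and an nth-occurrence argument) can be reproduced in a few lines. An illustrative Python sketch:

```python
# instr(source, search, position, occurrence): 1-based position of the
# occurrence-th match of search at or after position; 0 if not found.
def instr(source, search, position=1, occurrence=1):
    idx = position - 1                 # convert to 0-based
    for _ in range(occurrence):
        idx = source.find(search, idx)
        if idx < 0:
            return 0
        idx += 1                       # next scan starts one past the hit
    return idx                         # idx is already 1-based here

print(instr("choose a chocolate chip cookie", "ch", 2, 2))
```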

Error 3782: Improper column reference in the search condition of a


joined table
I'm getting an error 3782 when trying to add the last 2 fields in the first select statement, and I don't
know what is wrong. Any help would be appreciated. Thanks.
select
mp.mop_desc as "Card Type",
ph.CR_CARD_NBR as "CC Last 4",
ph.name as Card_Holder,
cast(ph.pymt_dt as date format 'mm/dd/yyyy') as PayDt,
ph.pymt_amt,
s.grp_brn_id as CC_Charge_GPBR,
rb.rnt_agr_nbr as Rent_Agr,
rb.chkoutstn as Renting_GPBR,
rb.chkinstn as Close_GPBR,
cast(rb.co_tmsp as date format 'mm/dd/yyyy') as Check_Out,
cast(rb.ci_tmsp as date format 'mm/dd/yyyy') as Check_In,
rb.ecr_lgcy_resv_nbr as Ralph#,
rb.ecr_ticket_no as ECARS2#,
rb.dvr_frst_name as Driver_First,

rb.dvr_srnm as Driver_Last,
cast(ph.paph_fin_trans_ref_id as decimal(19,0)) as refid,
fin_tran.paymt_mdia_proc_sys_cde as Settlement, *****
fin_tran.prim_acct_frst_six_dgt_nbr as First_Six ******
from
rfs_rv.pre_applied_pymts_hdr ph
join
rfs.stns s on ph.pymt_stn_id = s.stn_id
join
rfs.mthd_of_pymts mp on ph.mop_mop_cd = mp.mop_cd
join
rfs_rv.pre_applied_pymts_det pd on ph.pymt_id = pd.pap_pymt_id
join
paymt.fin_tran ft on fin_tran.fin_tran_ref_id =cast(ph.paph_fin_trans_ref_id as decimal(19,0))
left outer join (
select
ra.rnt_agr_nbr,
ra.ecr_ticket_no,
ra.ecre_rent_cntrct_nbr,
ra.ecr_lgcy_resv_nbr,
ra.co_tmsp,
ra.ci_tmsp,
sto.grp_brn_id as ChkOutStn,
sti.grp_brn_id as ChkInStn,
dr.dvr_srnm,
dr.dvr_frst_name
from
rfs_rv.rnt_agrs ra
join
rfs.stns sto on ra.sta_stn_id_orig_co = sto.stn_id
join
rfs.stns sti on ra.sta_stn_id_orig_co = sti.stn_id
join
rfs_rv.dvr_rras dr on ra.rnt_agr_nbr = dr.rdy_rnt_agr_nbr
where
dr.main_dvr_flg = 'MR'
) rb on pd.ticket_no = rb.ecr_ticket_no
where
mp.mop_desc = ?
and ph.CR_CARD_NBR = ?
and ph.pymt_dt between '2015-05-30 00:00:00' and '2015-06-26 23:59:59'
and ph.cust_nbr = ?
Hard to be sure without being able to explain/run the SQL, but it looks like you have
paymt.fin_tran aliased as ft and you are referring to it in your select list as fin_tran.
And in the ON condition: fin_tran.fin_tran_ref_id instead of ft.fin_tran_ref_id.
This one is where the error is coming from. Once you fix that, the one RGlass pointed out in the
select list would add an unconstrained join of fin_tran to your query, making it likely to get incorrect
answers and run a very long time.
Thank you for your advice. That did the trick.

BTEQ date import error


Hi All,
I have text file delimited with '|'.
emp_id|DOB(mm-dd-yyyy)|DOJ(dd/mm/yyyy)

*DOJ - Date of joining

101|12-25-1986|24/10/2008
102|01-23-1982|28/11/2006
.IMPORT VARTEXT '|' FILE=C:ABC.txt;
.REPEAT *
USING
emp_id (VARCHAR(3)),
emp_dob (VARCHAR(10)),
emp_doj (VARCHAR(10))
INSERT INTO my_db.my_emp_tb
values
(
:emp_id,
:emp_dob,
:emp_doj
);
==> ERROR 2666 : Invalid date supplied.

Could anybody please tell me how to import these date formats?


It works fine with date format 'YYYY-MM-DD'.
Try
INSERT INTO my_db.my_emp_tb
values
(
:emp_id,
CAST(:emp_dob AS DATE FORMAT 'MM-DD-YYYY'),
CAST(:emp_doj AS DATE FORMAT 'DD/MM/YYYY')
);

HTH.

Cheers.
Carlos.
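The two FORMAT strings in the fix map directly onto strptime directives ('MM-DD-YYYY' is '%m-%d-%Y', 'DD/MM/YYYY' is '%d/%m/%Y'). An illustrative Python parse of the example rows, for comparison only:

```python
# Parse the two delimited-file date formats from the example rows.
from datetime import datetime

dob = datetime.strptime("12-25-1986", "%m-%d-%Y").date()   # MM-DD-YYYY
doj = datetime.strptime("24/10/2008", "%d/%m/%Y").date()   # DD/MM/YYYY

print(dob.isoformat(), doj.isoformat())
```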

.SET ECHOREQ ON
Partition-wise Row Count of a Partitioned Table
How to find partition_name, count(*) from a partitioned table?
I want each partition's row count, plus the count of rows that fall in no partition.
There's no partition name, just a number:

select partition, count(*)
from tab
group by 1
order by 1;
