Wednesday, March 28, 2012

Normalizing my Database

Please, I have created some tables: Delivary, with the columns (DelivaryId, DelivaryNo, QtyRecieved, DelivaryDate, ProductId), and Product, with the columns (ProductId, ProductCode, ProductName, ProductPrice). As you can see, the Product table keeps a record of products while the Delivary table keeps a record of stock supplied. I would like to create another table that will keep a record of stock sold out (an Invoice table) based on the quantity received from the Delivaries table.

Please help

I am thinking you need two more tables: one for the inventory and another for the dates (delivery date, received date, purchase date and more). You want to separate out functional dependencies. Try the links below for sample data models and a normalization tutorial. Hope this helps.

http://www.databaseanswers.org/data_models/

http://www.utexas.edu/its/windows/database/datamodeling/rm/rm7.html

|||

Caddre is right; you probably need, at the least, an inventory table (tracks current stock), a receiving table (tracks products received, date/time, etc.), and a sales table (tracks products sold, date/time) in addition to your products table.

Create triggers on the receiving table to update the quantities when records are added/updated/deleted.

Same for the sales table.

|||

Motley:

Caddre is right; you probably need, at the least, an inventory table (tracks current stock), a receiving table (tracks products received, date/time, etc.), and a sales table (tracks products sold, date/time) in addition to your products table.

Create triggers on the receiving table to update the quantities when records are added/updated/deleted.

Same for the sales table.

Hi, thanks for your response. The receiving table is my Delivaries table (DelivaryId, DelivaryNo, Quantity, Date). If I understand you correctly, you mean I should create another table for keeping the current stock record, but if I do, what then happens to the quantity column in the Delivaries table?

|||It would be the quantity delivered. You would then use that in the insert/update trigger to update the inventory table's quantity (or what is commonly called the "on hand" quantity).|||I think I'm beginning to understand you, but please try to expand more on what a trigger does to a table or table column|||

CREATE TRIGGER (Transact-SQL)

Designing DDL Triggers

You can find more if you search on the internet.
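To make the trigger idea above concrete, here is a minimal sketch of an AFTER INSERT trigger that adds each delivered quantity to an inventory table's on-hand total. The Inventory table and its QtyOnHand column, and the ProductId/Quantity columns on Delivaries, are assumptions for illustration rather than the poster's actual schema:

CREATE TRIGGER trgDelivaryInsert
ON Delivaries
AFTER INSERT
AS
BEGIN
    -- Sum the newly delivered quantity per product (the trigger may fire for a multi-row insert)
    -- and add it to the matching product's on-hand total
    UPDATE inv
    SET inv.QtyOnHand = inv.QtyOnHand + d.TotalQty
    FROM Inventory inv
    INNER JOIN (SELECT ProductId, SUM(Quantity) AS TotalQty
                FROM inserted
                GROUP BY ProductId) d
        ON d.ProductId = inv.ProductId
END
GO

A similar trigger on the sales (invoice) table would subtract the sold quantity, and UPDATE/DELETE triggers would adjust the totals in the same way, which is what the advice above describes.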

Monday, March 26, 2012

Normalisation

I'm trying to normalise a badly designed database. I have created new tables that split one huge table up, and I think it is now in 3NF.

My problem is that once I've created the new tables, how do I get the data out of the huge table into the smaller ones? Are there SQL commands to do this? I'm using Oracle 8i.

Any help would be greatly appreciated.

Many thanks,

Francis.|||Hello,

There are many ways to do this:
For my explanation I use BIG as the name for the big table and SMALL1, SMALL2 for the newly designed tables:

1)
Create a SQL statement that gives you back the detailed result for tables SMALL1 and SMALL2.

After that use
INSERT INTO SMALL1 (field1, field2...)
SELECT field1, field2 ... FROM big WHERE ...

The SELECT part is your designed SELECT statement ... do this with both queries ... OK?

2) Export the data with SQL*Loader and import it ... (see the SQL*Loader documentation)

3) Write a PL/SQL procedure to select the data from the big table and put it into the new ones ...

If you have problems writing the procedure - just let me know - I will help you ...

Hope this helps ?

Greetings
Manfred Peter
(Alligator Company)
http://www.alligatorsql.com

*** did you ever use Oracle8i, MySQL, Interbase and Birdstep RDM Server databases at the same time? No? Use AlligatorSQL ***|||Thanks for your prompt response.

I had thought of writing it as:

CREATE TABLE SMALL1(Column3, Column2, Column6)
AS
SELECT Column3, Column2, Column6
FROM BIGTABLE;

and do the same for all the other small tables. Then drop the big table.

Would this work?

Many thanks again.|||Hello,

That's perfect and very quick too ... I thought that you had already created the tables ...

Do it in this way :)

By the way ... are you interested in a new Oracle tool? Just send me an e-mail at webmaster@.alligatorsql.com and I will give you some information about AlligatorSQL.

Thanks

Manfred Peter
(Alligator Company)
http://www.alligatorsql.com

Noob: How do I run PL/SQL on MS SQL?

Hi experts,

I'm using MS SQL 2000. The closest I could get to having PL/SQL within MS SQL was through its stored procedures feature.

I created a new stored procedure within the built-in Northwind database. I pasted the following inside:

--------------------
declare

vname Employees.firstname%TYPE;

begin

SELECT firstname INTO vname FROM Employees
WHERE firstname = 'Nancy';

dbms_output.put_line ('Name is ' || vname);

EXCEPTION

when NO_DATA_FOUND then
dbms_output.put_line ('no data found');

when TOO_MANY_RECORDS then
dbms_output.put_line ('too many records');

END;

--------------------

I checked the syntax and I get an error about the Employees table:
Error 155: 'Employees' is not a recognized cursor option

Any idea?

Thanks..|||You've got some serious reading to do ... there is a major methodology switch you'll have to understand (not to mention syntax) between the two...

Oracle has a lot of nice "built in" features, and at the same time is extremely painful...

For example, you can not use %TYPE...

You must explicitly declare EVERYTHING...

This is how you write what your example is trying to do...

CREATE PROC mySproc99
AS

BEGIN

DECLARE @vname varchar(40), @Rowcount int, @Error int

SELECT @vname = Lastname
FROM Employees
WHERE firstname = 'Nancy'

SELECT @RowCount = @@ROWCOUNT, @Error = @@Error

IF @Error <> 0
BEGIN
PRINT 'Error Condition ' + CONVERT(varchar(5),@Error)
Return @Error
END

IF @RowCount <> 1
BEGIN
PRINT 'Multiple Rows Found Error'
Return 2
END

IF @RowCount = 1
BEGIN
PRINT 'Name is '+ @vname
Return 0
END
END

GO

EXEC mySproc99
GO

DROP PROC mySproc99
GO

Friday, March 23, 2012

NooB Scripting Question How to use mailto

I am a noob to sql and asp programming. I am working on a db for the
gaming squad I am a member of. One of the pages I created is a roster
list of all the squad members. Part of this roster is listing each
member's email address. What several people have asked of me is to
make it so the email addresses can be clicked on to open their email
programs, just as html allows the mailto function to work.

Here is a copy of the coding I am currently using:
<td align="center"><font face="Arial"
color="#C0C0C0"><%=rst("email")%></font></td>
This will list the email address of the person, but how do I make it
so the line will function as a mailto command?
The page is saved as an .asp.

Thanks for any advice!

Dave|||davestrike (davestrike@.nventure.com) writes:
> Here is a copy of the coding I am currently using:
><td align="center"><font face="Arial"
> color="#C0C0C0"><%=rst("email")%></font></td>
> This will list the email address of the person, but how do I make it
> so the line will function as a mailto command?
> The page is saved as an .asp.

Actually, I don't have the slightest idea, nor do I feel too compelled
to have one, since this is an SQL Server forum, and not related to ASP.

But they say ASP has something to do with HTML, in which case I would
try with:

<td align="center"><font face="Arial" color="#C0C0C0">
<A HREF="mailto:<%=rst("email")%>"><%=rst("email")%></A>
</font></td>
But if that does not work (and it probably doesn't) you should look
for an ASP forum.

--
Erland Sommarskog, SQL Server MVP, sommar@.algonet.se

Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp

Noob need help with recursion

I created a table called MyTable with the following fields :

Id int
Name char(200)
Parent int

I also created the following stored procedure named Test, which is supposed to list all records and sub-records.

CREATE PROCEDURE Test
@Id int
AS

DECLARE @Name char(200)
DECLARE @Parent int

DECLARE curLevel CURSOR LOCAL FOR
SELECT * FROM MyTable WHERE Parent = @Id
OPEN curLevel
FETCH NEXT FROM curLevel INTO @Id, @Name, @Parent
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT @Id AS Id, @Name AS Name, @Parent AS Parent
EXEC Test @Id
FETCH NEXT FROM curLevel INTO @Id, @Name, @Parent
END

CLOSE curLevel
DEALLOCATE curLevel
GO

I added a MxDataGrid and DataSourceControl.
SelectCommand property of the DataSourceControl = EXEC Test 0

When I run the aspx page, it only shows 1 record. I tried to change the parameter to 1, 2, 3 but it always shows only 1 record (not the same one, though).

Is there something wrong with the stored procedure?|||OK, I made some modifications to make it work properly, but there are some limitations. I have to store the results in a temp table. Can I do something similar but without the temp table?

(Note : I changed some field/table names)


-- ----------------------
-- Fill and select the temp table where the node and sub-node ids are stored
-- ----------------------
CREATE PROCEDURE GetNodesAndSubNodes
@NodeId INT
AS

DELETE FROM TmpNodesAndSubNodes
EXEC GetNodesAndSubNodesRecursive @NodeId
SELECT * FROM TmpNodesAndSubNodes
GO

-- ----------------------
-- Fill the temp table with the ids of the node and its sub nodes
-- ----------------------
CREATE PROCEDURE GetNodesAndSubNodesRecursive
@NodeId INT
AS

-- Store the node id into the temp table
INSERT INTO TmpNodesAndSubNodes VALUES(@NodeId)

-- Declare a local cursor to seek in the nodes table
-- and some variables to retrieve the values
DECLARE @CurrentNodeId INT
DECLARE @CurrentParentNodeId INT
DECLARE curLevel CURSOR LOCAL FOR
SELECT Id FROM Nodes WHERE ParentNodeId=@NodeId

-- Fetch the records and call the recursive procedure for each node id
OPEN curLevel
FETCH NEXT FROM curLevel INTO @CurrentNodeId
WHILE @@FETCH_STATUS = 0
BEGIN
EXEC GetNodesAndSubNodesRecursive @CurrentNodeId
FETCH NEXT FROM curLevel INTO @CurrentNodeId
END

-- Clean up
CLOSE curLevel
DEALLOCATE curLevel

GO

|||Aaarg :)

I decided to work on a non-recursive solution.
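As a side note, on SQL Server 2005 and later a recursive common table expression can return a node and all of its sub-nodes in a single set-based statement, with no temp table and no nested cursors. This is only a sketch against the Nodes (Id, ParentNodeId) table assumed above:

CREATE PROCEDURE GetNodesAndSubNodesCte
@NodeId INT
AS
-- Anchor member: the requested node; recursive member: join children to what has been found so far
;WITH NodeTree AS
(
    SELECT Id, ParentNodeId
    FROM Nodes
    WHERE Id = @NodeId
    UNION ALL
    SELECT n.Id, n.ParentNodeId
    FROM Nodes n
    INNER JOIN NodeTree t ON n.ParentNodeId = t.Id
)
SELECT Id FROM NodeTree
GO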

Non-Report Hyperlinks on Report Manager Home Page?

Is it possible to add hyperlinks to the Report Manager Home Page that link to other aspx pages? I have created some associated data entry aspx pages that I want to enable access to from the Report Manager Home Page (if possible).

Thanks!

It isn't...maybe you could run it inside an IFRAME, surrounded by other UI (including "your" hyperlinks)?|||

pittpanther wrote:

Is it possible to add hyperlinks to the Report Manager Home Page that link to other aspx pages? I have created some associated data entry aspx pages that I want to enable access to from the Report Manager Home Page (if possible).

Thanks!

Why not just create an "index" aspx page that would have links to reports similar to report manager? You could have this index page in the same project as the associated data entry aspx pages.

It seems to me that you're thinking about it backwards.

|||

Thanks for the replies...

The end user is already fairly comfortable with the Report Manager front end, so I am kind of stuck with it as my launching point.

The only thing I have come up with is an "empty" report accessible from the main folder that only includes the hyperlinks for the "Data Entry" aspx pages. A little kludgy though.

|||Russell - I am not familiar with an IFRAME... could you elaborate a little more?

Wednesday, March 21, 2012

Non-queried parameter giving 'Invalid Column Name' error

I have created a non-queried boolean parameter called HideOptionalColumns.
This field doesn't exist in my query result set. I use this value in the
Hidden attribute of the Visibility property for the column to hide or display
certain columns based on the user's runtime selection. It works fine when I
View Report in VS - the columns are hidden/displayed appropriately. But
when I run the report, I get the following error:
An error has occurred during report processing.
Query execution failed for data set 'myDset'.
Invalid column name 'HideOptionalColumns'.
This seems to imply that it does need that parameter in the result set
itself. But that configuration doesn't work since the rows are then filtered
out based on that parameter. Is there another way to accomplish this hiding
of columns?
Thank you!|||On Apr 30, 4:04 pm, marian <mar...@.discussions.microsoft.com> wrote:
> I have created a non-queried boolean parameter called HideOptionalColumns.
Instead of using a parameter, you could place another field/textbox
right in your table or matrix. For each column you want to hide, set
the Hide option to trigger from the field or text box you add. I use
this a lot and it works without any problems.
> This field doesn't exist in my query result set. I use this value in the
> Hidden attribute of the Visiblity property for the column to hide or display
> certain columns based on the user's runtime selection. It works fine when I
> View Report in VS - the columns are hiddden/displayed appropriately. But
> when I run the report, I get the following error:
> An error has occurred during report processing.
> Query execution failed for data set 'myDset'.
> Invalid column name 'HideOptionalColumns'.
> This seems to imply that it does need that parameter in the result set
> itself. But that configuration doesn't work since the rows are then filtered
> out based on that parameter. Is there another way to accomplish this hiding
> of columns?
> Thank you!
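For reference, and only as a hedged guess at the cause: in Reporting Services a report parameter is referenced through the Parameters collection rather than the Fields collection, so the column's Hidden expression would simply be =Parameters!HideOptionalColumns.Value. If the HideOptionalColumns name instead ends up in the dataset query or in a query parameter mapping, SQL Server tries to resolve it as a column of the dataset, which would produce exactly the "Invalid column name" error quoted above.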

non-existent images

Hi,
I have a database that contains student IDs, and I created a report
that shows the student ID along with a photograph of the student. The
picture is fetched from a website, and is not stored in the database.
Problem:
Not all students have a photograph. If no picture exists, a red cross is
shown on the output. Can this red cross be eliminated, or can the
picture be hidden when there's an invalid URL for the image source?
Tnx,
E.T.|||You could create a small 1x1 pixel gif and upload it to your server.
Then for the image tag, write an expression like
=IIF(IsNothing(Fields!PictureURL.Value), "mysmallgif.gif",
Fields!PictureURL.Value)
That way you will display the gif if the url is invalid. You'll have to
tweak the code to fit your fields etc, but the concept should work.
Kaisa M. Lindahl Lervik
"Erik Thijs" <ErikdotThijs@.removethis.khkdotbe> wrote in message
news:esytuq38GHA.4964@.TK2MSFTNGP02.phx.gbl...
> Hi,
> I have a database that contains student-id's, and I created a report that
> shows the student ID along with a fotograph of the student. The picture is
> fetched from a website, and is not stored in the database.
> Problem:
> Not all students have a fotograph. If no picture exists, a red cross is
> shown on the output. Can this red cross be eliminated, or can the picture
> be hidden when there's an invalid URL for the image source?
> Tnx,
> E.T.|||Kaisa M. Lindahl Lervik wrote:
> You could create a small 1x1 pixel gif and upload to your server.
> THen for the image tag, write an expression like
> =IIF(IsNothing(Field!PictureURL.Value), "mysmallgif.gif",
> field!PictureURL.Value)
Forgot to mention:
The URL of the picture is generated in RS, not in the database, based on
the studentID.
For instance, if a student has studentID stud0001, then the URL for the
image in Reporting Services becomes:
Image1.Value="http://fotoserver/" & Fields!studentId.Value & ".jpg"
So every student has a picture with a URL value. The question is how to
detect invalid URLs...|||How do you store the images on the server? Do you upload images when you
register the data about your student, or do you have to upload the picture
after you know the StudentID?
If you do it when registering, you might be able to change your db to save
whether or not a picture has been uploaded.
If "studentpicturesaved" = true, then Image1.Value = "http://fotoserver/" &
Fields!studentID.Value & ".jpg"; else Image1.Value = "http://fotoserver/dummypic.gif".
Or else you have to access the file server to check if a picture with that
ID is there, which is fussy at best. I'm pretty sure there's no way of
checking whether an image source URL is valid just before the image is being
rendered on the screen. You have to register in the database whether there's a
picture.
Kaisa M. Lindahl Lervik
"Erik Thijs" <ErikdotThijs@.removethis.khkdotbe> wrote in message
news:OTt$zE48GHA.4348@.TK2MSFTNGP03.phx.gbl...
> Kaisa M. Lindahl Lervik schreef:
>> You could create a small 1x1 pixel gif and upload to your server.
>> THen for the image tag, write an expression like
>> =IIF(IsNothing(Field!PictureURL.Value), "mysmallgif.gif",
>> field!PictureURL.Value)
> forgot to mention:
> The url of the picture is generated in RS, not in the database, based on
> the studentID.
> For instance, a student has studentID stud0001, then the url for the image
> in reporting services becomes:
> Image1.Value="http://fotoserver/" & Fields!studentId.Value & ".jpg"
> So every student has a picture with a URL value. The question is how to
> detect invalid URL's...

Tuesday, March 20, 2012

None aggregation function

I have created a cube where I simply want to store non-aggregatable values. I have used the None aggregation function setting but can't seem to browse the leaf level and see any values after processing. If I set the aggregation function to Sum, Max or Min, all works as expected. I assume that None should be used when you want to load non-aggregatable data into a cube. I would have expected to be able to see data at the leaf level; is this assumption incorrect? I'm using SQL 2005 SP1.

The None aggregation function works; you probably simply are not browsing at the real leaves. It is not that simple to get to the real leaves, since you need to position every single attribute included in the measure group to get there. For testing you can use the Leaves() function (a few more details can be found here: http://www.sqljunkies.com/WebLog/mosha/archive/2006/04/29/leaves.aspx), but with just a browser you will have a hard time navigating to the real leaves.|||

Mosha-

Thanks for the rapid response! You are correct; I had a tough time getting to the leaves. The problem was... I used the upper filter area in the VS browser, rather than the filters directly on top of the grid, and each view gave a differing response. I have put screenshots into a Word doc to demonstrate but can't find a way to attach them to this post. At least my question is answered, but I'm slightly concerned that I got 2 different results based on the way I filtered the view.

Thanks Again

Chris

Monday, March 19, 2012

Non-Cluster Index PK

I have a client that has created primary keys with non-clustered indexes.
These primary keys are usually monotonically increasing. How much will this
slow down table joins using primary keys that have non-clustered indexes?
Please help me with this topic.
Thank You,
It really depends. What is clustered? Are foreign keys indexed? Are other
columns used in predicates indexed? Are they having performance problems at
all? Can you run some tests on a development server?
Adam Machanic
SQL Server MVP
http://www.datamanipulation.net
"Joe K." <Joe K.@.discussions.microsoft.com> wrote in message
news:A3D054C0-F6A4-4AAF-A217-F26B83079F0D@.microsoft.com...
> I have a client that has created Primary Keys with Non-Cluster indexes.
> These Primary Keys are usually monotically increasing. Who much will this
> slow down table joins using Primary Keys that Non-Cluster Indexes.
> Please help me with this topic.
> Thank You,
|||On Wed, 26 Oct 2005 09:21:05 -0700, Joe K. <Joe
K.@.discussions.microsoft.com> wrote:
>I have a client that has created Primary Keys with Non-Cluster indexes.
>These Primary Keys are usually monotically increasing. Who much will this
>slow down table joins using Primary Keys that Non-Cluster Indexes.
>Please help me with this topic.
Several answers posted to you yesterday.
J.
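For readers following the thread, here is a minimal sketch of the two options being discussed; the table and column names are made up for illustration:

CREATE TABLE dbo.Orders
(
    OrderId int IDENTITY(1,1) NOT NULL,
    CustomerId int NOT NULL,
    OrderDate datetime NOT NULL,
    -- Primary key enforced through a NONCLUSTERED index, as the client did
    CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED (OrderId)
)
GO

-- The clustered index can then be placed on another column instead,
-- for example one frequently used in range scans or joins
CREATE CLUSTERED INDEX IX_Orders_OrderDate ON dbo.Orders (OrderDate)
GO

Whether this layout helps or hurts depends on exactly the questions Adam raises above: what the clustered index is on, and which columns the joins and predicates actually use.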

Friday, March 9, 2012

non clustered index on heap

What would be the implications, in terms of performance and I/O operations, if a non-clustered index is created on a heap, i.e. without a clustered index on the table?|||It depends. There is no straightforward answer. It depends on the schema, row size, etc. Generally speaking, it is recommended to have a clustered index on every table, especially for large tables. There are, however, cases where you can get the best bulk insert performance by inserting into a heap vs. a clustered index. Also, the more indexes you have on a table, the slower the performance. Search MSDN for the whitepaper on bulk load; that should give some ideas on one aspect of this problem.
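As a simple illustration of the two layouts being compared (the table and index names below are made up):

-- A heap: no clustered index, only a non-clustered index on top of it
CREATE TABLE dbo.StagingRows (Id int NOT NULL, Payload varchar(100) NOT NULL)
CREATE NONCLUSTERED INDEX IX_StagingRows_Id ON dbo.StagingRows (Id)

-- The generally recommended alternative: a clustered index organizing the table itself
CREATE CLUSTERED INDEX CIX_StagingRows_Id ON dbo.StagingRows (Id)

With the non-clustered index on the heap, each index entry points to a row via a RID (file/page/slot) rather than a clustering key, which is part of what drives the trade-offs described above.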

Wednesday, March 7, 2012

nodes does not seem to work when xml value is created through a select

Dear all,
While
DECLARE @x xml
SET @x = '<ROOT><a>111</a><a>222</a><a>333</a><a>444</a></ROOT>'
SELECT (select @x).query('//a')
works fine, a very similar query
DECLARE @x xml
SET @x = '<ROOT><a>111</a><a>222</a><a>333</a><a>444</a></ROOT>'
SELECT t.c.query('.') from (select @x).nodes('//a') t(c)
fails with
Msg 102, Level 15, State 1, Line 3
Incorrect syntax near '.'.
Does anyone know whether this is expected/documented behaviour?
Thanks
PeterDerived table syntax in FROM clause requires table alias to follow it
(unlike scalar sub-query in SELECT).
You can achieve what you want with CROSS APPLY:
DECLARE @x xml
SET @x = '<ROOT><a>111</a><a>222</a><a>333</a><a>444</a></ROOT>'
SELECT t2.c.query('.') from (select @x) t1(x) CROSS APPLY t1.x.nodes('//a')
t2(c)
Best regards,
Eugene
--
This posting is provided "AS IS" with no warranties, and confers no rights.
"Peter" <pgp.coppens@.gmail.com> wrote in message
news:1138820267.235233.39270@.f14g2000cwb.googlegroups.com...
> Dear all,
> While
> DECLARE @.x xml
> SET @.x = '<ROOT><a>111</a><a>222</a><a>333</a><a>444</a></ROOT>'
> SELECT (select @.x).query('//a')
> works fine, a very similar query
> DECLARE @.x xml
> SET @.x = '<ROOT><a>111</a><a>222</a><a>333</a><a>444</a></ROOT>'
> SELECT t.c.query('.') from (select @.x).nodes('//a') t(c)
> fails with
> Msg 102, Level 15, State 1, Line 3
> Incorrect syntax near '.'.
> Anyone knows whether this is expected/documented behaviour?
> Thanks
> Peter
>|||That helps,
Thanks,
Peter.

Saturday, February 25, 2012

nocheck when creating a constraint

I was looking at a database creation script and I have a bunch of
constraints being created with NOCHECK on them. I think I know what
the NOCHECK does (don't check for data issues). I'm not sure why the
original coder would define them like this, since there is no data to
begin with when the tables are built. Am I missing something? Does
this hurt the database table?
ALTER TABLE [dbo].[mytable] WITH NOCHECK ADD
CONSTRAINT [PK_mytable] PRIMARY KEY ([mytable_id]) ON [PRIMARY]|||That seems to be a stupid thing to do. You won't gain anything, and possibly just have a non-trusted
constraint which can lead to worse performance (non-trusted will limit some of the optimizations
that can be done).
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://sqlblog.com/blogs/tibor_karaszi
"Derek" <gepetto_2000@.yahoo.com> wrote in message
news:1178199834.825140.114810@.y5g2000hsa.googlegroups.com...
> i was looking at a database creation script and i have a bunch of
> constraints being created with a nocheck on them. i think i know what
> the nocheck does (don't check for data issues). i'm not sure why the
> original coder would define them like this since there is no data to
> begin with when the tables are built. am i missing something? does
> this hurt the database table?
>
> ALTER TABLE [dbo].[mytable] WITH NOCHECK ADD
> CONSTRAINT [PK_mytable] PRIMARY KEY ([mytable_id]) ON [PRIMARY]
>|||Derek,
See this blog from SQL Server MVP Hugo Kornelis.
Can you trust your constraints?
http://sqlblog.com/blogs/hugo_kornelis/archive/2007/03/29/can-you-trust-your-constraints.aspx
AMB
"Derek" wrote:
> i was looking at a database creation script and i have a bunch of
> constraints being created with a nocheck on them. i think i know what
> the nocheck does (don't check for data issues). i'm not sure why the
> original coder would define them like this since there is no data to
> begin with when the tables are built. am i missing something? does
> this hurt the database table?
>
> ALTER TABLE [dbo].[mytable] WITH NOCHECK ADD
> CONSTRAINT [PK_mytable] PRIMARY KEY ([mytable_id]) ON [PRIMARY]
>
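As a practical follow-up to Hugo's point, here is a small sketch (SQL Server 2005 and later; the table name is just a placeholder) showing how to find constraints that ended up untrusted and how to revalidate them:

-- List check and foreign key constraints the optimizer does not trust
SELECT name, 'CHECK constraint' AS constraint_type
FROM sys.check_constraints WHERE is_not_trusted = 1
UNION ALL
SELECT name, 'FOREIGN KEY'
FROM sys.foreign_keys WHERE is_not_trusted = 1

-- Revalidate all constraints on a table so they become trusted again
ALTER TABLE dbo.mytable WITH CHECK CHECK CONSTRAINT ALL

Note that NOCHECK only affects CHECK and FOREIGN KEY constraints; a PRIMARY KEY added WITH NOCHECK, as in the script above, is still enforced through its unique index.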

No. of User Accounts in SQL Server 2000 and SQL Server 2005... please

I want to know the maximum number of users that can be created in SQL SERVER 2005 and the previous version...?

Yorker:

I am going to assume that you will hit the limitations of the "uid" field. This field is a SMALLINT datatype; however, it appears that groups begin at 16384 and that this field must be non-negative. Therefore, it appears that a user database MIGHT be limited to 16384 distinct users. However, server-wide you can have many databases, therefore leading to at least N x 16384 distinct users. There does not look to be any "variable size" limitation on the LOGIN side, because this is now based on SIDs, whereas in much older versions of SQL Server it was formerly based on "SUID"s -- the cardinality of the number of SIDs appears to be enormous.

Would someone please check me on this?


Dave

|||The maximum number of logins in 2000 was 35,365, in 2005 it was increased to 65,000 (I think).

However, I would HIGHLY recommend using windows authentication and group membership for rights, instead of 35,000+ SQL logins.|||any evidence to prove your numbers... ? Gentleman ?|||See this: http://msdn2.microsoft.com/en-us/library/ms187376.aspx

This says in 2005 "you can create more than 32,767 users". I was slightly off with my number in 2000. I thought I read somewhere else about the 64k limit in 2005, but I might be mistaken.

I still think it is a nightmare to manage 32k logins.|||From http://msdn2.microsoft.com/en-us/library/ms187376.aspx, it can be inferred that SQL2K can only have 32,767 SQL users, but SQL2K5 can have more. How much more, I don't know off-hand.

But this doesn't say how many logins there could be (although I would guess it would be the same). http://msdn2.microsoft.com/en-us/library/ms174355.aspx doesn't give many clues.

I would strongly recommend using Active Directory, where the limit doesn't actually apply in the same way. Plus, do you really want to have your DBAs managing this type of thing?

Rob|||But... it turns out that http://support.microsoft.com/default.aspx/kb/303879 quite clearly states that SQL2000 can only have 16K users.

But I guess the types used in sys.sysusers will cater for more.|||

I think that is referring to "database logins" and not "server logins".

The other one does say "This is because, in SQL Server 2005, you can create more than 32,767 users, groups, and roles, and 32,767 data types." This kind of implies, users/groups/roles use the same id counter, so combined you can't have more than 32,767.

MS added more than 16k of users for a reason, although I find it hard to believe someone requested that to be added because they ran out of logins. I have a hard enough time managing 100 SQL logins (excluding AD Groups), I can't imagine the headache I would have with more than that. :)
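If anyone wants to see how close a server actually is to these limits, a quick count of the existing logins (SQL logins, Windows logins, and Windows groups) is easy to get on SQL Server 2005 and later; the type codes below are the documented ones:

SELECT type_desc, COUNT(*) AS login_count
FROM sys.server_principals
WHERE type IN ('S', 'U', 'G')   -- SQL login, Windows login, Windows group
GROUP BY type_desc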