Showing posts with label select. Show all posts

Monday, March 26, 2012

Normal view compared to partitioned view

Hi,
I have one million records in one table; the data covers, for example, January to December. I currently use a normal view to select a certain period of records (say, January), then query that view through a remote view in FoxPro to filter the records further. Does this make my data selection faster? Or do I need to separate my data into a few tables and then use a partitioned view to query the data out? Which one is faster?
Thanks.
regards,
florence
Hi
1,000,000 rows does not justify partitioned views. Once you get into the 1000s of millions it becomes justified (and then you need to put each table on a separate disk volume to make it perform). Rather, make sure that you have the correct indexes and that you configure your hardware correctly.
Regards
Mike
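For readers who do want to see what a local partitioned view looks like, here is a minimal sketch; the table and column names are invented for illustration. Monthly tables each carry a CHECK constraint on the date column, and a UNION ALL view combines them, so the optimizer can skip tables whose constraint excludes the query's date range:

```sql
-- Hypothetical monthly tables; the CHECK constraint on each is what
-- lets the optimizer eliminate member tables at query time.
CREATE TABLE dbo.Sales_Jan (
    SaleID   int      NOT NULL PRIMARY KEY,
    SaleDate datetime NOT NULL
        CHECK (SaleDate >= '20120101' AND SaleDate < '20120201'),
    Amount   money    NOT NULL
);
CREATE TABLE dbo.Sales_Feb (
    SaleID   int      NOT NULL PRIMARY KEY,
    SaleDate datetime NOT NULL
        CHECK (SaleDate >= '20120201' AND SaleDate < '20120301'),
    Amount   money    NOT NULL
);
GO
-- The partitioned view itself:
CREATE VIEW dbo.SalesAll AS
    SELECT SaleID, SaleDate, Amount FROM dbo.Sales_Jan
    UNION ALL
    SELECT SaleID, SaleDate, Amount FROM dbo.Sales_Feb;
```

A query such as `SELECT * FROM dbo.SalesAll WHERE SaleDate >= '20120101' AND SaleDate < '20120201'` should then touch only Sales_Jan, which is the payoff Mike describes, but only worth the upkeep at much larger row counts.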


Friday, March 23, 2012

Noob question: SELECT statement

Hi,
I'm very new to RS. I'm trying to display a single piece of data from a table. Do I need to create a separate dataset for it, or is there some way I can insert a SELECT statement into a textbox? Maybe a subquery of an existing dataset?
Thanks in advance.
WOW!
To date this has had 111 views and ZERO replies. Can no one help me?
|||No, you can't do this. Your SELECT statement must populate a dataset and return the single value, if that's what you want. You can also use a stored procedure to return the value into the dataset.

If it's a value from a dataset that is already loaded, you may be able to select which row in the dataset to take the value from. I am not 100% sure how to do that one, though.

Hope that helps
Martin
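For anyone landing here with the same question, the usual pattern Martin describes is a tiny dataset whose query returns exactly one row and one column, which the textbox expression then references. A sketch, with the table, column, and dataset names invented:

```sql
-- Dataset query ("OrderCountDS") that yields a single scalar value.
-- In the report, the textbox expression would reference it with
-- something like =First(Fields!TotalOrders.Value, "OrderCountDS").
SELECT COUNT(*) AS TotalOrders
FROM dbo.Orders
WHERE OrderDate >= '20050101';
```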

non-sa users get access denied in OLE DB query

On SQL Server 2005, 64 bit, this query:
SELECT * FROM OPENROWSET('IBMDADB2', 'DSN=DB2T;', 'SELECT SCAC_CODE,
SCAC_NAME,SCAC_CITY, SCAC_ST, SCAC_STATUS FROM
MISDB2A.FASMSCAC_STD_CARRIER_CODE')
Gets this message:
Msg 7415, Level 16, State 1, Line 1
Ad hoc access to OLE DB provider 'IBMDADB2' has been denied. You must access
this provider through a linked server.
It works if it is run by the sa login, or by a login that has been granted
the sysadmin Server Role.
How can I permission non-sa logins to access the OLE DB provider?
|||Ad hoc remote queries are disabled by default in SQL 2005. Check the
"Surface Area Configuration for Features (Ad Hoc Remote Queries) - Database
Engine " topic in BOL
(ms-help://MS.SQLCC.v9/MS.SQLSVR.v9.en/uirfsql9/html/fe754945-2cd2-4cc1-8ae1
-4de01907b382.htm).
Dejan Sarka
http://www.solidqualitylearning.com/blogs/
"Stu" <Stu@.discussions.microsoft.com> wrote in message
news:0896B540-3262-42A8-8D4E-97D900C35A91@.microsoft.com...
> On SQL Server 2005, 64 bit, this query:
> SELECT * FROM OPENROWSET('IBMDADB2', 'DSN=DB2T;', 'SELECT SCAC_CODE,
> SCAC_NAME,SCAC_CITY, SCAC_ST, SCAC_STATUS FROM
> MISDB2A.FASMSCAC_STD_CARRIER_CODE')
> Gets this message:
> Msg 7415, Level 16, State 1, Line 1
> Ad hoc access to OLE DB provider 'IBMDADB2' has been denied. You must
> access
> this provider through a linked server.
> It works if it is run by the sa login, or by a login that has been granted
> the sysadmin Server Role.
> How can I permission non-sa logins to access the OLE DB provider?
|||It has been turned on!
With it left off, EVERYBODY gets this message:
Msg 15281, Level 16, State 1, Line 1
SQL Server blocked access to STATEMENT 'OpenRowset/OpenDatasource' of
component 'Ad Hoc Distributed Queries' because this component is turned off
as part of the security configuration for this server. A system administrator
can enable the use of 'Ad Hoc Distributed Queries' by using sp_configure. For
more information about enabling 'Ad Hoc Distributed Queries', see "Surface
Area Configuration" in SQL Server Books Online.
But with it turned on, unless you have sysadmin privs, you get this message:
Msg 7415, Level 16, State 1, Line 1
Ad hoc access to OLE DB provider 'IBMDADB2' has been denied. You must access
this provider through a linked server.
So, with ad hoc turned on, how do you permission non-sa logins to access the
OLE DB provider?
|||Stu (Stu@.discussions.microsoft.com) writes:
> But turned on, uness you have sysadmin privs, you get this message:
> Msg 7415, Level 16, State 1, Line 1
> Ad hoc access to OLE DB provider 'IBMDADB2' has been denied. You must
> access this provider through a linked server.
> So, with ad hoc turned on, how do you permission non-sa logins to access
> the OLE DB provider?
Books Online gives us this little tidbit:
OPENROWSET can be used to access remote data from OLE DB data sources
only when the DisallowAdhocAccess registry option is explicitly set to
0 for the specified provider, and the Ad Hoc Distributed Queries
advanced configuration option is enabled. When these options are not
set, the default behavior does not allow for ad hoc access.
I searched the registry for DisallowAdhocAccess and found that under
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\Providers
there is a whole list of providers. Of these, SQLNCLI has DisallowAdhocAccess
set to 0, but the others do not. I did a test with a plain test user, and
it was able to use OPENROWSET with SQLNCLI, but not with MSDASQL that
did not have DisallowAdhocAccess listed.
So that seems to be the key.
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx
|||> I searched the registry for DisallowAdhocAccess and found that under
> HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL.1\Providers
> there is a whole list of providers. Of these SQLNLCI has
> DisallowAdhocAccess
> set to 0, but the others not. I did a test with a plain-test user, and
> it was able to use OPENROWSET with SQLNCLI, but not with MSDASQL that
> did not have DisallowAdhocAccess listed.
Yes, you nailed it. In BOL, it also states "This option controls the ability
of non-administrators to run ad hoc queries. Administrators are not affected
by this option." In addition, there is a funny thing: you can set this
option to 1 through SSMS (Server Objects - Linked Servers - Providers -
properties of a specific provider). When you check the checkbox, SSMS
creates the registry key. However, you cannot set it to 0 with SSMS. When
you uncheck the checkbox, SSMS deletes the key. So the only option to set it
to 0 is with registry editor (ok, or with SMO, I guess). Not very
recommendable.
Dejan Sarka
http://www.solidqualitylearning.com/blogs/
|||Dejan Sarka (dejan_please_reply_to_newsgroups.sarka@.avtenta.si) writes:
> Yes, you nailed it. In BOL, it also stands "This option controls the
> ability of non-administrators to run ad hoc queries. Administrators are
> not affected by this option." In addition, there is a funny thing: you
> can set this option to 1 through SSMS (Server Objects - Linked Servers -
> Providers - properties of a specific provider). When you check the
> checkbox, SSMS creates the registry key. However, you cannot set it to 0
> with SSMS. When you uncheck the checkbox, SSMS deletes the key. So the
> only option to set it to 0 is with registry editor (ok, or with SMO, I
> guess).
Well, Mgmt Studio uses SMO, so I would not put my bets on that one.
In my opinion this is a bug. Someone has apparently not understood how
this provider option works. I've submitted
https://connect.microsoft.com/SQLSe...=257494
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server 2005 at
http://www.microsoft.com/technet/pr...oads/books.mspx
Books Online for SQL Server 2000 at
http://www.microsoft.com/sql/prodin...ions/books.mspx
|||I'm working now.
Thanks guys, for your research and feedback!
|||Voted :-)
Dejan Sarka
http://www.solidqualitylearning.com/blogs/
"Erland Sommarskog" <esquel@.sommarskog.se> wrote in message
news:Xns98D4B72B433ADYazorman@.127.0.0.1...
> Dejan Sarka (dejan_please_reply_to_newsgroups.sarka@.avtenta.si) writes:
> Well, Mgmt Studio uses SMO, so I would not put my bets on that one.
> In my opinion this is a bug. Someone has apparently not understood how
> this provider option works. I've submitted
> https://connect.microsoft.com/SQLSe...=257494
>
> --
> Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
> Books Online for SQL Server 2005 at
> http://www.microsoft.com/technet/pr...oads/books.mspx
> Books Online for SQL Server 2000 at
> http://www.microsoft.com/sql/prodin...ions/books.mspx
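To summarize the thread as a runnable sketch (run as sysadmin; the DB2 connection details below simply reuse the values from the posts): enable the server-wide option with sp_configure, and, rather than editing DisallowAdhocAccess in the registry for each non-sysadmin login, create a linked server so those logins avoid ad hoc access entirely, as the error message suggests. Exact provider arguments for IBMDADB2 may differ in your environment; treat them as placeholders.

```sql
-- 1. Enable ad hoc distributed queries server-wide (SQL 2005).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'Ad Hoc Distributed Queries', 1;
RECONFIGURE;

-- 2. Alternative that needs no registry edit: define a linked server
--    and let non-sysadmin logins query through it instead of OPENROWSET.
EXEC sp_addlinkedserver
    @server     = 'DB2T',       -- name used in four-part queries
    @srvproduct = 'DB2',
    @provider   = 'IBMDADB2',
    @provstr    = 'DSN=DB2T;';  -- provider string from the original query
```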

Wednesday, March 21, 2012

non-logged operations

SQL 7.0 SP1 on NT 4 SP6a Cluster.
We have a DB with the option "select into/bulk copy" set to true.
When non-logged operations happen, the backup of the transaction log fails, so we do a differential backup instead and everything is OK.
My questions are:

UPGRADING to SQL 2K,

1 Do we have to choose the "Bulk-Logged" recovery model?
and if so,
2 Do we still need to run differential backups because the t-log backup will fail as on SQL 7.0?
Can somebody help me with this?
TIA.
Franco
:cool:
|||In BULK_LOGGED recovery mode, certain bulk operations are only minimally logged, increasing performance and decreasing log size. Performing bulk operations does not prevent you from backing up the transaction log, because they are tracked through BCM pages: for every data file there is a BCM page in which each bit corresponds to an extent modified by a minimally logged operation. When you perform a transaction log backup, SQL scans the BCM and backs up those extents. As a result you get larger transaction log backups than in FULL mode.

HTH,
OBRP
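Putting OBRP's answer into T-SQL terms for the SQL 2000 upgrade: the replacement for the old "select into/bulk copy" database option is the BULK_LOGGED recovery model, and log backups keep working after minimally logged operations. A sketch, with a placeholder database name and backup path:

```sql
-- Switch the database to bulk-logged recovery (placeholder name).
ALTER DATABASE MyAppDB SET RECOVERY BULK_LOGGED;

-- Log backups still succeed after minimally logged operations;
-- they just include the extents those operations touched, so they
-- are larger than under FULL recovery.
BACKUP LOG MyAppDB TO DISK = 'D:\Backups\MyAppDB_log.bak';
```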

Non-logged insert... select

Hi,

Is there any way in either sql2k or 2005 to do an "insert ... select" without writing to the tran log, similar to a non-logged bcp from a flat file? It seems crazy that I should have to go outside the engine and back in again just to avoid this logging. Or am I mistaken, and this is already what happens when you do an insert... select when the db is in bulk logged or simple mode?

Thanks!

Mike

Hey Mike. You can make use of the 'select...into' statement to perform this type of operation in bulk; of course, that means the target table can't already exist. There is no way to perform an 'insert' statement with minimal logging. The only options you have for performing minimally logged operations in SQL 2000 are the following commands:

- BULK INSERT
- bcp
- select...into
- create index
- text and image operations (updatetext and writetext)

There are also prerequisites for minimally logged operations, including:

- Table not being replicated
- Table level locking is specified/used
- Specific index considerations (different depending on types of indexes present, whether or not data exists in the table, etc...see BOL for more information)

In SQL 2005, you have the option of using any of the above, and also the new 'BULK' option with the OPENROWSET command to load data from a text file using minimal logging. I suspect this may be your best option if you are running SQL 2005 and loading data into an existing table. The prerequisites for bulk loading still apply though. For more information on this new 2005 bulk operation, see BOL.

Also in SQL 2005, you could use one of the bulk load commands to bulk load data into a new table using minimal logging, then switch that table's data into an existing table if you make use of table partitioning correctly...the switch into the existing table's partition scheme could be a meta-data only operation if you design the partitioning strategy correctly. See SQL 2005 BOL for more information on table partitioning, or just post another question here and we'll see what we can get you for pointers.

HTH,

|||

Can I use select...into and try to partition the table? Such as:

select top(1000) *
into dbo.create_on_the_fly_tbl
from dbo.any_existing_tbl
on partitionrangeSCM(id)
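The OPENROWSET(BULK ...) option mentioned in the reply looks roughly like this in SQL 2005. The table name, data file path, and format file are placeholders, and the TABLOCK hint is one of the prerequisites for the load to be minimally logged:

```sql
-- Minimally logged load into an existing table (SQL 2005).
-- dbo.ExistingTable, 'C:\data\sales.dat', and the format file
-- are all hypothetical names for illustration.
INSERT INTO dbo.ExistingTable WITH (TABLOCK)
SELECT *
FROM OPENROWSET(
        BULK 'C:\data\sales.dat',
        FORMATFILE = 'C:\data\sales.fmt') AS src;
```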


nonindexed query & mem usage

1) for a given table that has around 26869141 rows, a query against a
nonindexed column such as << select * from table where somenonindexedcol =
25 >> seems to take a lot of memory as it runs. it must have to do a table
scan and store the results in memory, correct?
2) sql server never releases that memory to the os i think (even with a
max memory limit set)?
3) plus the query has the ability to go past that max limit as well?
4) the solution is to create a nonclustered index on that col, correct?
5) is it ok if i add more than one column to that nonclustered index?
im a developer, so your patience is appreciated.
thx
On Sat, 17 Dec 2005 00:18:12 GMT, usenetjb
<usenet.20.jimbo-black@.antichef.net> wrote:
>1) for a given table that has around 26869141 rows, a query against a
>nonindexed column such as << select * from table where somenonindexedcol =
>25 >> seems to take a lot of memory as it runs. it must have to do a table
>scan and store the results in memory, correct?
Well, if it's *around* 26m, then yes.
It doesn't store the results in memory, but as long as it has to read
them into memory to look at momentarily, it uses available memory to
cache them, just in case you ask for them again soon.

>2) sql server never releases that memory to the os i think (even with a
>max memory limit set)?
It will release it depending on competing demands.

>3) plus the query has the ability to go past that max limit as well?
If you mean, can your table be larger than your RAM, certainly yes!

>4) the solution is to create a nonclustered index on that col, correct?
Maybe.
It will help if the statistics on the distribution of values is such
that SQLServer thinks it's going to be cheaper to use the index. This
is generally the case if you're going to return just a few rows. But
if your query is going to return more than about 20% of the table, an
index won't help, scans are generally much faster. If the values
being selected are all contiguous, a clustered index may still be
faster than a complete scan. As always, "it depends" is applicable.

>5) is it ok if i add more than one column to that nonclustered index?
OK by me.
SQLServer only remembers the statistics for the first column, and an
index on two columns is less efficient at fetching values selected by
only one column, but if your query is by two fields, putting an index
on both at once is called a "covering index" and becomes an excellent
idea!

>im a developer, so your patience is appreciated.
I hear that.
J.
|||jxstern <jxstern@.nowhere.xyz> wrote in
news:t1o6q19libh9i8cucou27cgce5254e52p3@.4ax.com:

> On Sat, 17 Dec 2005 00:18:12 GMT, usenetjb
> <usenet.20.jimbo-black@.antichef.net> wrote:
thx just to confirm

> Well, if it's *around* 26m, then yes.
> It doesn't store the results in memory, but as long as it has to read
> them into memory to look at momentarily, it uses available memory to
> cache them, just in case you ask for them again soon.
yeah i understand. that explains why i saw the memory usage go way up
after that query ran.

> It will release it depending on competing demands.
well, i saw the mem usage go way up to around 1.5 gigs, and never really
went down to the pre-query state around 650M. so maybe i'm
misunderstanding something here. i would think that it would go back down
to around 650M after a bit.

> Maybe.
> It will help if the statistics on the distribution of values is such
> that SQLServer thinks it's going to be cheaper to use the index. This
> is generally the case if you're going to return just a few rows. But
> if your query is going to return more than about 20% of the table, an
> index won't help, scans are generally much faster. If the values
> being selected are all contiguous, a clustered index may still be
> faster than a complete scan. As always, "it depends" is applicable.
it will return probably around 3 to 90 rows at the max. so the index will
probably help me out most in this situation.

> OK by me.
> SQLServer only remembers the statistics for the first column, and an
> index on two columns is less efficient at fetching values selected by
> only one column, but if your query is by two fields, putting an index
> on both at once is called a "covering index" and becomes an excellent
> idea!
so if i have 2 queries that use 2 diff cols, then is it better to create 2
separate nonclustered indexes?
thx much
|||Some comments inline:
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"usenetjb" <usenet.20.jimbo-black@.antichef.net> wrote in message
news:Xns972EE3850DCF4usenetjb@.207.115.17.102...
> jxstern <jxstern@.nowhere.xyz> wrote in
> news:t1o6q19libh9i8cucou27cgce5254e52p3@.4ax.com:
>
> thx just to confirm
>
> yeah i understand. that explains why i saw the memory usage go way up
> after that query ran.
>
> well, i saw the mem usage go way up to aroun 1.5 gigs, and never really
> went down to the pre-query state around 650M. so maybe i'm
> misunderstanding something here. i would think that it would go back down
> to around 65om after a bit.
Why? How would SQL Server know that you won't refer to those particular pages soon again? The
purpose of caching data is for the data to be in cache next time someone refers to the page. SQL
Server will not release pages *unless the machine is memory constrained*. See:
http://support.microsoft.com/default...;en-us;q321363
http://www.mssqlserver.com/faq/troub...memoryleak.asp

>
> it will return probably around 3 to 90 rows at the max. so the index will
> probably help me out most in this situation.
>
> so if i have 2 queries that use 2 diff cols, then its better to create 2
> seperate nonclustered indexes?
Possibly. You create indexes to support your queries. In order to comment what is best for you we
would know what the queries you want to support look like.

> thx much
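Since the thread's answer for a 3-to-90-row predicate is simply a nonclustered index on the filtered column, here is what that and the "covering index" variant discussed above might look like; the table and column names are invented:

```sql
-- Single-column index supporting
--   WHERE somenonindexedcol = 25
CREATE NONCLUSTERED INDEX IX_BigTable_SomeCol
    ON dbo.BigTable (somenonindexedcol);

-- Two-column index for queries that filter on both columns;
-- column order matters, since statistics and seeks favor the leading key.
CREATE NONCLUSTERED INDEX IX_BigTable_ColA_ColB
    ON dbo.BigTable (col_a, col_b);
```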

nonindexed query & mem usage

1) for a given table that has around 26869141 rows, a query against a
nonindexed column such as << select * from table where somenonindexedcol =
25 >> seems to take a lot of memory as it runs. it must have to do a table
scan and store the results in memory, correct?
2) sql server never releases that memory to the os i think (even with a
max memory limit set)?
3) plus the query has the ability to go past that max limit as well?
4) the solution is to create a nonclustered index on that col, correct?
5) is it ok if i add more than one column to that nonclustered index?
im a developer, so your patience is appreciated.
thxOn Sat, 17 Dec 2005 00:18:12 GMT, usenetjb
<usenet.20.jimbo-black@.antichef.net> wrote:
>1) for a given table that has around 26869141 rows, a query against a
>nonindexed column such as << select * from table where somenonindexedcol =
>25 >> seems to take a lot of memory as it runs. it must have to do a table
>scan and store the results in memory, correct?
Well, if it's *around* 26m, then yes.
It doesn't store the results in memory, but as long as it has to read
them into memory to look at momentarily, it uses available memory to
cache them, just in case you ask for them again soon.

>2) sql server never releases that memory to the os i think (even with a
>max memory limit set)?
It will release it depending on competing demands.

>3) plus the query has the ability to go past that max limit as well?
If you mean, can your table be larger than your RAM, certainly yes!

>4) the solution is to create a nonclustered index on that col, correct?
Maybe.
It will help if the statistics on the distribution of values is such
that SQLServer thinks it's going to be cheaper to use the index. This
is generally the case if you're going to return just a few rows. But
if your query is going to return more than about 20% of the table, an
index won't help, scans are generally much faster. If the values
being selected are all contiguous, a clustered index may still be
faster than a complete scan. As always, "it depends" is applicable.

>5) is it ok if i add more than one column to that nonclustered index?
OK by me.
SQLServer only remembers the statistics for the first column, and an
index on two columns is less efficient at fetching values selected by
only one column, but if your query is by two fields, putting an index
on both at once is called a "covering index" and becomes an excellent
idea!

>im a developer, so your patience is appreciated.
I hear that.
J.|||jxstern <jxstern@.nowhere.xyz> wrote in
news:t1o6q19libh9i8cucou27cgce5254e52p3@.
4ax.com:

> On Sat, 17 Dec 2005 00:18:12 GMT, usenetjb
> <usenet.20.jimbo-black@.antichef.net> wrote:
thx just to confirm

> Well, if it's *around* 26m, then yes.
> It doesn't store the results in memory, but as long as it has to read
> them into memory to look at momentarily, it uses available memory to
> cache them, just in case you ask for them again soon.
yeah i understand. that explains why i saw the memory usage go way up
after that query ran.

> It will release it depending on competing demands.
well, i saw the mem usage go way up to aroun 1.5 gigs, and never really
went down to the pre-query state around 650M. so maybe i'm
misunderstanding something here. i would think that it would go back down
to around 65om after a bit.

> Maybe.
> It will help if the statistics on the distribution of values is such
> that SQLServer thinks it's going to be cheaper to use the index. This
> is generally the case if you're going to return just a few rows. But
> if your query is going to return more than about 20% of the table, an
> index won't help, scans are generally much faster. If the values
> being selected are all contiguous, a clustered index may still be
> faster than a complete scan. As always, "it depends" is applicable.
it will return probably around 3 to 90 rows at the max. so the index will
probably help me out most in this situation.

> OK by me.
> SQLServer only remembers the statistics for the first column, and an
> index on two columns is less efficient at fetching values selected by
> only one column, but if your query is by two fields, putting an index
> on both at once is called a "covering index" and becomes an excellent
> idea!
so if i have 2 queries that use 2 diff cols, then its better to create 2
seperate nonclustered indexes?
thx much|||Some comments inline:
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"usenetjb" <usenet.20.jimbo-black@.antichef.net> wrote in message
news:Xns972EE3850DCF4usenetjb@.207.115.17.102...
> jxstern <jxstern@.nowhere.xyz> wrote in
> news:t1o6q19libh9i8cucou27cgce5254e52p3@.
4ax.com:
>
> thx just to confirm
>
> yeah i understand. that explains why i saw the memory usage go way up
> after that query ran.
>
> well, i saw the mem usage go way up to aroun 1.5 gigs, and never really
> went down to the pre-query state around 650M. so maybe i'm
> misunderstanding something here. i would think that it would go back down
> to around 65om after a bit.
Why? How would SQL Server know that you won't refer to those particular page
s soon again? The
purpose of caching data is for the data to be in cache next tome someone ref
ers to the page. SQL
Server will not release pages *unless the machine is memory constrained*. Se
e:
http://support.microsoft.com/defaul...b;en-us;q321363
http://www.mssqlserver.com/faq/trou...-memoryleak.asp

>
> it will return probably around 3 to 90 rows at the max. so the index will
> probably help me out most in this situation.
>
> so if i have 2 queries that use 2 diff cols, then its better to create 2
> seperate nonclustered indexes?
Possibly. You create indexes to support your queries. In order to comment wh
at is best for you we
would know what the queries you want to support look like.

> thx muchsql

nonindexed query & mem usage

1) for a given table that has around 26869141 rows, a query against a
nonindexed column such as << select * from table where somenonindexedcol = 25 >> seems to take a lot of memory as it runs. it must have to do a table
scan and store the results in memory, correct?
2) sql server never releases that memory to the os i think (even with a
max memory limit set)?
3) plus the query has the ability to go past that max limit as well?
4) the solution is to create a nonclustered index on that col, correct?
5) is it ok if i add more than one column to that nonclustered index?
im a developer, so your patience is appreciated.
thxOn Sat, 17 Dec 2005 00:18:12 GMT, usenetjb
<usenet.20.jimbo-black@.antichef.net> wrote:
>1) for a given table that has around 26869141 rows, a query against a
>nonindexed column such as << select * from table where somenonindexedcol =>25 >> seems to take a lot of memory as it runs. it must have to do a table
>scan and store the results in memory, correct?
Well, if it's *around* 26m, then yes. :)
It doesn't store the results in memory, but as long as it has to read
them into memory to look at momentarily, it uses available memory to
cache them, just in case you ask for them again soon.
>2) sql server never releases that memory to the os i think (even with a
>max memory limit set)?
It will release it depending on competing demands.
>3) plus the query has the ability to go past that max limit as well?
If you mean, can your table be larger than your RAM, certainly yes!
>4) the solution is to create a nonclustered index on that col, correct?
Maybe.
It will help if the statistics on the distribution of values is such
that SQLServer thinks it's going to be cheaper to use the index. This
is generally the case if you're going to return just a few rows. But
if your query is going to return more than about 20% of the table, an
index won't help, scans are generally much faster. If the values
being selected are all contiguous, a clustered index may still be
faster than a complete scan. As always, "it depends" is applicable.
>5) is it ok if i add more than one column to that nonclustered index?
OK by me.
SQLServer only remembers the statistics for the first column, and an
index on two columns is less efficient at fetching values selected by
only one column, but if your query is by two fields, putting an index
on both at once is called a "covering index" and becomes an excellent
idea!
>im a developer, so your patience is appreciated.
I hear that.
J.|||jxstern <jxstern@.nowhere.xyz> wrote in
news:t1o6q19libh9i8cucou27cgce5254e52p3@.4ax.com:
> On Sat, 17 Dec 2005 00:18:12 GMT, usenetjb
> <usenet.20.jimbo-black@.antichef.net> wrote:
thx just to confirm
>>1) for a given table that has around 26869141 rows, a query against a
>>nonindexed column such as << select * from table where
>>somenonindexedcol = 25 >> seems to take a lot of memory as it runs.
>>it must have to do a table scan and store the results in memory,
>>correct?
> Well, if it's *around* 26m, then yes. :)
> It doesn't store the results in memory, but as long as it has to read
> them into memory to look at momentarily, it uses available memory to
> cache them, just in case you ask for them again soon.
yeah i understand. that explains why i saw the memory usage go way up
after that query ran.
>>2) sql server never releases that memory to the os i think (even with
>>a max memory limit set)?
> It will release it depending on competing demands.
well, i saw the mem usage go way up to aroun 1.5 gigs, and never really
went down to the pre-query state around 650M. so maybe i'm
misunderstanding something here. i would think that it would go back down
to around 65om after a bit.
>>4) the solution is to create a nonclustered index on that col,
>>correct?
> Maybe.
> It will help if the statistics on the distribution of values is such
> that SQLServer thinks it's going to be cheaper to use the index. This
> is generally the case if you're going to return just a few rows. But
> if your query is going to return more than about 20% of the table, an
> index won't help, scans are generally much faster. If the values
> being selected are all contiguous, a clustered index may still be
> faster than a complete scan. As always, "it depends" is applicable.
it will return probably around 3 to 90 rows at the max. so the index will
probably help me out most in this situation.
>>5) is it ok if i add more than one column to that nonclustered index?
> OK by me.
> SQLServer only remembers the statistics for the first column, and an
> index on two columns is less efficient at fetching values selected by
> only one column, but if your query is by two fields, putting an index
> on both at once is called a "covering index" and becomes an excellent
> idea!
so if i have 2 queries that use 2 diff cols, then it's better to create 2
separate nonclustered indexes?
thx much|||Some comments inline:
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
Blog: http://solidqualitylearning.com/blogs/tibor/
"usenetjb" <usenet.20.jimbo-black@.antichef.net> wrote in message
news:Xns972EE3850DCF4usenetjb@.207.115.17.102...
> jxstern <jxstern@.nowhere.xyz> wrote in
> news:t1o6q19libh9i8cucou27cgce5254e52p3@.4ax.com:
>> On Sat, 17 Dec 2005 00:18:12 GMT, usenetjb
>> <usenet.20.jimbo-black@.antichef.net> wrote:
> thx just to confirm
>>1) for a given table that has around 26869141 rows, a query against a
>>nonindexed column such as << select * from table where
>>somenonindexedcol = 25 >> seems to take a lot of memory as it runs.
>>it must have to do a table scan and store the results in memory,
>>correct?
>> Well, if it's *around* 26m, then yes. :)
>> It doesn't store the results in memory, but as long as it has to read
>> them into memory to look at momentarily, it uses available memory to
>> cache them, just in case you ask for them again soon.
> yeah i understand. that explains why i saw the memory usage go way up
> after that query ran.
>
>>2) sql server never releases that memory to the os i think (even with
>>a max memory limit set)?
>> It will release it depending on competing demands.
> well, i saw the mem usage go way up to around 1.5 GB, and never really
> went down to the pre-query state of around 650 MB. so maybe i'm
> misunderstanding something here. i would think that it would go back down
> to around 650 MB after a bit.
Why? How would SQL Server know that you won't refer to those particular pages soon again? The
purpose of caching data is for the data to be in cache the next time someone refers to the page. SQL
Server will not release pages *unless the machine is memory constrained*. See:
http://support.microsoft.com/default.aspx?scid=kb;en-us;q321363
http://www.mssqlserver.com/faq/troubleshooting-memoryleak.asp
>>4) the solution is to create a nonclustered index on that col,
>>correct?
>> Maybe.
>> It will help if the statistics on the distribution of values is such
>> that SQLServer thinks it's going to be cheaper to use the index. This
>> is generally the case if you're going to return just a few rows. But
>> if your query is going to return more than about 20% of the table, an
>> index won't help, scans are generally much faster. If the values
>> being selected are all contiguous, a clustered index may still be
>> faster than a complete scan. As always, "it depends" is applicable.
> it will return probably around 3 to 90 rows at the max. so the index will
> probably help me out most in this situation.
>
>>5) is it ok if i add more than one column to that nonclustered index?
>> OK by me.
>> SQLServer only remembers the statistics for the first column, and an
>> index on two columns is less efficient at fetching values selected by
>> only one column, but if your query is by two fields, putting an index
>> on both at once is called a "covering index" and becomes an excellent
>> idea!
> so if i have 2 queries that use 2 diff cols, then it's better to create 2
> separate nonclustered indexes?
Possibly. You create indexes to support your queries. In order to comment on what is best for you we
would need to know what the queries you want to support look like.
> thx much
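A sketch of the indexing options discussed in this thread; the table and column names (dbo.BigTable, ColA, ColB) are hypothetical, not from the original posts:

```sql
-- Two separate single-column indexes: each supports a query that
-- filters on only that column.
CREATE NONCLUSTERED INDEX IX_BigTable_ColA ON dbo.BigTable (ColA);
CREATE NONCLUSTERED INDEX IX_BigTable_ColB ON dbo.BigTable (ColB);

-- One composite ("covering") index: best when a single query filters
-- on both columns; the histogram is kept on the leading column (ColA).
CREATE NONCLUSTERED INDEX IX_BigTable_ColA_ColB
    ON dbo.BigTable (ColA, ColB);
```

Whether the composite index or two single-column indexes wins depends, as the thread says, on what the actual queries look like.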

Tuesday, March 20, 2012

Nonempty Problem

following query is not working properly. need a solution (need to filter out the empty rows)

SELECT
{[Measures].[Description],[Measures].[Amount]} ON COLUMNS,
non empty CROSSJOIN ([CustomTimeSet],[GeneralLedgerSet]) on rows

FROM Profitability
WHERE [Account—ETBillingCode].[MDA]
NonEmpty({filter(CROSSJOIN([CustomTimeSet],[GeneralLedgerSet]),[Measures].[MdaCodeTotal] <> 0 )}) on rows

I am a bit suspicious about the [Measures].[Description] measure. Is this a calculated measure? It could be what is causing the non empty clause not to work. If you are using SSAS 2005, something like the following might work:

SELECT
{[Measures].[Description],[Measures].[Amount]} ON COLUMNS,
nonempty( CROSSJOIN ([CustomTimeSet],[GeneralLedgerSet]) , {Measures.MdaCodeTotal})on rows

FROM Profitability
WHERE [Account—ETBillingCode].[MDA]

Note I am using the second parameter in the NonEmpty() function to set the measure context for the non empty evaluation.

Monday, March 12, 2012

non-bitwise exclusive or?

Is there a way to do a logical exclusive OR (XOR) in sql server?

I'm trying to do this in where clause, something like:

WHERE
(not exists (select 1 from table a where a.date > '01/30/03') XOR
exists (select 1 from table a where a.date < '01/30/03'))

Thanks!|||no idea what you mean by "exclusive or" -- do you mean one is true or the other is true, but not both false and not both true?|||Originally posted by r937
no idea what you mean by "exclusive or" -- do you mean one is true or the other is true, but not both false and not both true?

Regular "OR" logic with boolean values a & b:

a = true, b = false returns TRUE
a = false, b = true returns TRUE
a = true, b = true returns TRUE
a = false, b = false returns FALSE

Exclusive "OR" logic (XOR) with same:

a = true, b = false returns TRUE
a = false, b = true returns TRUE
a = true, b = true returns FALSE
a = false, b = false returns FALSE

In other words XOR returns true when ONLY one of the two arguments is true, but not both.|||the general structure is:

declare @a int
declare @b int

set @a=1
set @b=1

if not ( (@a=1) AND (@b=1) ) AND not((@a=0) AND (@b=0))|||okay, translating this to your example, your two conditions are

a: not exists (select 1 from table a where a.date > '01/30/03')
b: exists (select 1 from table a where a.date < '01/30/03')

and you want exactly one of these to be true

that's easy!

WHERE (
(not exists (select 1 from table a
where a.date > '01/30/03'))
AND
NOT (exists (select 1 from table a
where a.date < '01/30/03'))
)
OR (
NOT (not exists (select 1 from table a
where a.date > '01/30/03'))
AND (exists (select 1 from table a
where a.date < '01/30/03'))
Originally posted by r937
okay, translating this to your example, your two conditions are

yeah but I was hoping to be able to do it using a single operator like you can in ORACLE, which has XOR. ;) Oh well, guess my monstrous SQL will get more so, and more proof that microsoft products suck hehe|||rather than look at the shortcomings of the database language, i would examine my conditions a little more closely

the WHERE clause comes down to either

1) table1 contains no dates at all, or else at most one date, 01/30/03
or
2) table1 contains at least one date on each side of 01/30/03, and possibly even 01/30/03 itself

i cannot imagine where this might be a real world situation that you want to filter for, but i can tell you one thing, i would certainly code it so that the purpose of the filtering is crystal clear, rather than obfuscate the meaning using some fancy logical operator that the next guy to maintain the code (which could be you yourself six months from now) first has to figure out before even touching the code

just because a particular language feature exists doesn't mean it is appropriate to use it|||I think I was somewhat unclear. I actually just made up the two date clauses. My real clauses are very, very large, so I didn't want to replicate them here because that would have just been confusing.

I didn't realize that XOR was such a fancy operator ;)|||okay, i understand

good luck

you're gonna need it, eh :cool:|||We are sincerely sorry your code is bloated. Please accept our deepest condolences.|||I suppose you could write an XOR function. Doesn't seem like it would be that hard.|||You would have to pass boolean values as parameters, or convert them to bits (in which case you could just use the bitwise EXCLUSIVE OR operator ("^")).|||Hmm. Thinking about it, it actually would be very hard to write.|||where
(
case
when (complex_clause_1_true )
then 1
else 0
end
^
case
when (complex_clause_2_true )
then 1
else 0
end
) = 1|||HanafiH, that is insidious, devious, marginally incomprehensible...

... and gorgeous!

nice one

i would put a great big humungous comment next to that ^ explaining that it's not a typo, it's an actual legitimate operator

first time i've seen it, and i had to go look it up in BOL to confirm

kudos to you|||The logic is simple. It's writing it as a function that is the problem.

Unfortunately, a returned boolean result is not a valid SQL server datatype and so can't be passed as a parameter without converting it first. And once you convert it you can use any of the bitwise operators, so passing them to a function doesn't gain you much.

My preferred conversion would be:
select cast(count(*) as bit) from table
...but that's just because I have a mild aversion to CASE statements.|||count(*) ? You would really rather do that? Man, you ain't seen the data then!|||Count(*) is very fast, because under most circumstances the optimizer need only pull the rowcount from the system tables.|||Count(*) is very fast, because under most circumstances the optimizer need only pull the rowcount from the system tables.
Is this true in MS-SQL... When I test this the query plan estimates reading the clustered index, and with a trace it appears that the entire index is being read. I tried this both before and after updating statistics.|||If you have additional filtering criteria it would need to resort to searching the index, so the strategy to use will ultimately depend on exactly what "complex_clause_1" and "complex_clause_2" are.

I'd be willing to bet that in blm14_cu's case the statement could be rewritten with much greater clarity using JOINs than his EXISTS clauses, after which he could use some of the Null-related functions such as ISNULL and NULLIF to implement his logic.

blm14_cu is bashing SQL Server, but poorly written code is going to be difficult to translate to any platform.|||Originally posted by blindman
If you have additional filtering criteria it would need to resort to searching the index, so the strategy to use will ultimately depend on exactly what "complex_clause_1" and "complex_clause_2" are.

I'd be willing to bet that in blm14_cu's case the statement could be rewritten with much greater clarity using JOINs than his EXISTS clauses, after which he could use some of the Null-related functions such as ISNULL and NULLIF to implement his logic.

blm14_cu is bashing SQL Server, but poorly written code is going to be difficult to translate to any platform.

Absolutely. The XOR condition in relational terms is an inverse intersection and should be convertible to NOT IN (intersection) in almost all cases.
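Putting the thread together, here is a minimal sketch of HanafiH's CASE/`^` idiom with concrete predicates; the table and columns (dbo.Orders, ShippedDate, CancelledDate) are hypothetical placeholders for the "complex clauses":

```sql
-- Logical XOR between two predicates via the bitwise ^ operator.
-- Each CASE maps a boolean condition to 0/1; ^ yields 1 only when
-- exactly one of the two operands is 1.
SELECT o.OrderID
FROM dbo.Orders AS o
WHERE (CASE WHEN o.ShippedDate   IS NOT NULL THEN 1 ELSE 0 END)
    ^ (CASE WHEN o.CancelledDate IS NOT NULL THEN 1 ELSE 0 END) = 1;
```

This keeps each complex clause in one place instead of repeating it twice, which is the maintenance problem with the expanded (a AND NOT b) OR (NOT a AND b) form.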

Friday, March 9, 2012

NON EMPTY issue

Apologies if this is a bit simple but it's doing my head in a bit

Why would the following MDX return 4 correct records

Code Snippet

SELECT { [Measures].[Signed Surface Area], [Measures].[Amount Signed] } ON COLUMNS ,

non empty{ DESCENDANTS( [Property].[Customer].[All Property], [Property].[Customer].[Customer] ) } ON ROWS

FROM [Lease]

WHERE ( [Time Calcs].[YTD],[Time].[Year].&[2007].&[2].&[5], [Currency].&[26], [Unit of Measure].&[2] )

however adding another measure returns 50 additional spurious measures, all with blank rows

Code Snippet

SELECT NON EMPTY { [Measures].[Signed Leases], [Measures].[Signed Surface Area], [Measures].[Amount Signed] } ON COLUMNS ,

NON EMPTY { DESCENDANTS( [Property].[Customer].[All Property], [Property].[Customer].[Customer] ) } ON ROWS

FROM [Lease]

WHERE ( [Time].[Year].&[2007].&[2].&[5], [Currency].&[26], [Time Calcs].[YTD], [Unit of Measure].&[2] )

The "signed leases" measure was the field added

Presumably you mean that adding the Signed Leases measure returns another 50 spurious rows, not measures? Is Signed Leases a calculated measure, and if so what is its definition? I'm guessing [Time Calcs].[YTD] is another calculated measure (and is probably the culprit) - what is its definition? Have you been using the Non_Empty_Behavior property anywhere?

Chris

|||

Chris,

Yes, you were correct in your assumption. TimeCalcs was OK, it was actually the fact that signed leases was calculated. Adding an iif statement to create a null sorted it

NON EMPTY

Hello
I have an MDX statement that looks like this:
SELECT
{ Measures.members } on Columns ,
{ CrossJoin( [Time].[Calendar].[2004].[May]:[Time].
[Calendar].[2004].[July],[Business].[User].[Business
Entity Id].[212].Children ) } on Rows
FROM UsageStats
And it returns this via a matrix report:
                          2004
            | May | June | July | Total |
Alan Smith  |  2  |  3   |  2   |   7   |
Amy Marcus  |     |  3   |  3   |   6   |
Bob Fields  |     |      |      |       |
Carry Grant |  1  |      |      |   1   |
TOTAL       |  3  |  6   |  5   |  14   |
The report above represents hits to a website on a per-
Monthly basis for each user.
I would like to suppress the empty rows so that Bob Fields,
above, won't show up at all. I tried using
NonEmptyCrossJoin like this:
SELECT
{ Measures.members } on Columns ,
{ NonEmptyCrossJoin( [Time].[Calendar].[2004].[May]:
[Time].[Calendar].[2004].[July],[Business].[User].
[Business Entity Id].[212].Children ) } on Rows
FROM UsageStats
and just NON EMPTY like this:
SELECT
{ Measures.members } on Columns ,
{ NonEmptyCrossJoin( [Time].[Calendar].[2004].[May]:
[Time].[Calendar].[2004].[July],[Business].[User].
[Business Entity Id].[212].Children ) } on Rows
FROM UsageStats
Both methods above suppress the empty rows, but they also
suppress anyone who hasn't had any hits to the site in the
first month, even if they do have hits in the 2nd or 3rd
months (so both Bob Fields, and Amy Marcus are both left
out of the report above).
Does anyone know how to possibly get around this? Any help
at all will be greatly appreciated!!! :D
Best regards
Jano|||You might be able to use FILTER in conjunction with CoalesceEmpty to
meet your need:
Example of CoalesceEmpty against Foodmart 2000 Sales cube to set
Measures.[Store Sales] to 0 for any Quarter that is empty:
=====================================================================
WITH
MEMBER [Measures].[NonEmpty Sales] AS
'CoalesceEmpty(Measures.[Store Sales], 0)'
SELECT
CROSSJOIN( {[Time].[1997].children}, { [Measures].[NonEmpty
Sales] } )
ON COLUMNS,
CROSSJOIN( { [Store].[Store City].members }, {[Product].[Product
Department].members} )
ON ROWS
FROM Sales
=====================================================================
Now we add a FILTER statement to remove any stores that did not have sales in
any Quarter:
=====================================================================
WITH
MEMBER [Measures].[NonEmpty Sales] AS
'CoalesceEmpty(Measures.[Store Sales], 0)'
SELECT
CROSSJOIN( {[Time].[1997].children}, { [Measures].[NonEmpty
Sales] } )
ON COLUMNS,
FILTER(
CROSSJOIN( { [Store].[Store City].members }, {[Product].[Product
Department].members} ),
([Store].CurrentMember,
[Product].CurrentMember, [Measures].[NonEmpty Sales]) > 0 )
ON ROWS
FROM Sales
=====================================================================
--
-- John H. Miller [MSFT]
-- SQL Server BI Product Group
----
--
"This posting is provided 'AS IS' with no warranties, and confers no
rights."
----
--
jhmiller@.online.microsoft.com
"Jano" <JanoBermudes@.microsoft.com> wrote in message
news:31d401c47eca$c1c7b200$a301280a@.phx.gbl...
> Hello
> I have an MDX statement that looks like this:
> SELECT
> { Measures.members } on Columns ,
> { CrossJoin( [Time].[Calendar].[2004].[May]:[Time].
> [Calendar].[2004].[July],[Business].[User].[Business
> Entity Id].[212].Children ) } on Rows
> FROM UsageStats
>
> And it returns this via a matrix report:
>                           2004
>             | May | June | July | Total |
> Alan Smith  |  2  |  3   |  2   |   7   |
> Amy Marcus  |     |  3   |  3   |   6   |
> Bob Fields  |     |      |      |       |
> Carry Grant |  1  |      |      |   1   |
> TOTAL       |  3  |  6   |  5   |  14   |
> The report above represents hits to a website on a per-
> Monthly basis for each user.
>
> I would like to suppress the empty rows so that Bob Fields,
> above, won't show up at all. I tried using
> NonEmptyCrossJoin like this:
> SELECT
> { Measures.members } on Columns ,
> { NonEmptyCrossJoin( [Time].[Calendar].[2004].[May]:
> [Time].[Calendar].[2004].[July],[Business].[User].
> [Business Entity Id].[212].Children ) } on Rows
> FROM UsageStats
> and just NON EMPTY like this:
> SELECT
> { Measures.members } on Columns ,
> { NonEmptyCrossJoin( [Time].[Calendar].[2004].[May]:
> [Time].[Calendar].[2004].[July],[Business].[User].
> [Business Entity Id].[212].Children ) } on Rows
> FROM UsageStats
> Both methods above suppress the empty rows, but they also
> suppress anyone who hasn't had any hits to the site in the
> first month, even if they do have hits in the 2nd or 3rd
> months (so both Bob Fields, and Amy Marcus are both left
> out of the report above).
> Does anyone know how to possibly get around this? Any help
> at all will be greatly appreciated!!! :D
> Best regards
> Jano
>

Wednesday, March 7, 2012

NOLOCK usage

If we have a select statement using the NOLOCK hint ...
will it prevent other processes from
UPDATING/INSERTING/DELETING any of the table(s) it is
using within its query? Would really like to hear the
answer from an MS tech for validity purposes. Thanks and
have a great day.|||Jay,
From the BOL on FROM:
NOLOCK is equal to READUNCOMMITTED
READUNCOMMITTED
Specifies that dirty reads are allowed. This means that no shared locks are
issued and no exclusive locks are honored. Allowing dirty reads can result
in higher concurrency, but at the cost of lower consistency. If
READUNCOMMITTED is specified, it is possible to read an uncommitted
transaction or to read a set of pages rolled back in the middle of the read;
therefore, error messages may result. For more information about isolation
levels, see SET TRANSACTION ISOLATION LEVEL.
So, it absolutely will not prevent other processes from doing whatever they
want to the tables. One interesting error that you can get using NOLOCK is
Error 601 "could not continue scan due to data movement."
Russell Fields (not from Microsoft)
"Jay Kusch" <Jay.Kusch@.mm-games.com> wrote in message
news:014001c3927d$c453a810$a401280a@.phx.gbl...
> If we have a select statment using the NOLOCK hint ...
> will it prevent other processes from
> UPDATING/INSERTING/DELETING any of the table(s) it is
> using within it query? Would really like to hear the
> answer from an MS tech for validity purposes. Thanks and
> have a great day.
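To make the answer concrete, a hedged sketch of the two equivalent ways to request dirty reads; the table name (dbo.Orders) is hypothetical:

```sql
-- Table-level hint: this SELECT takes no shared locks and honors no
-- exclusive locks, so it never blocks concurrent writers on this table.
SELECT OrderID, Status
FROM dbo.Orders WITH (NOLOCK);

-- Session-level equivalent, applied to every table in the query:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT OrderID, Status
FROM dbo.Orders;
```

Either way, concurrent INSERT/UPDATE/DELETE statements proceed unblocked, and the reader may see uncommitted ("dirty") rows or hit error 601 if pages move underneath the scan.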

nodes does not seem to work when xml value is created through a select

Dear all,
While
DECLARE @x xml
SET @x = '<ROOT><a>111</a><a>222</a><a>333</a><a>444</a></ROOT>'
SELECT (select @x).query('//a')
works fine, a very similar query
DECLARE @x xml
SET @x = '<ROOT><a>111</a><a>222</a><a>333</a><a>444</a></ROOT>'
SELECT t.c.query('.') from (select @x).nodes('//a') t(c)
fails with
Msg 102, Level 15, State 1, Line 3
Incorrect syntax near '.'.
Does anyone know whether this is expected/documented behaviour?
Thanks
Peter|||Derived table syntax in the FROM clause requires a table alias to follow it
(unlike scalar sub-query in SELECT).
You can achieve what you want with CROSS APPLY:
DECLARE @x xml
SET @x = '<ROOT><a>111</a><a>222</a><a>333</a><a>444</a></ROOT>'
SELECT t2.c.query('.') from (select @x) t1(x) CROSS APPLY t1.x.nodes('//a')
t2(c)
Best regards,
Eugene
--
This posting is provided "AS IS" with no warranties, and confers no rights.
"Peter" <pgp.coppens@.gmail.com> wrote in message
news:1138820267.235233.39270@.f14g2000cwb.googlegroups.com...
> Dear all,
> While
> DECLARE @x xml
> SET @x = '<ROOT><a>111</a><a>222</a><a>333</a><a>444</a></ROOT>'
> SELECT (select @x).query('//a')
> works fine, a very similar query
> DECLARE @x xml
> SET @x = '<ROOT><a>111</a><a>222</a><a>333</a><a>444</a></ROOT>'
> SELECT t.c.query('.') from (select @x).nodes('//a') t(c)
> fails with
> Msg 102, Level 15, State 1, Line 3
> Incorrect syntax near '.'.
> Does anyone know whether this is expected/documented behaviour?
> Thanks
> Peter
>|||That helps,
Thanks,
Peter.
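As a follow-up sketch, the same CROSS APPLY pattern can also extract scalar values with the value() method; this extends Eugene's example and is not from the original thread:

```sql
DECLARE @x xml
SET @x = '<ROOT><a>111</a><a>222</a><a>333</a><a>444</a></ROOT>'
-- Shred each <a> node into its own row and pull out the int value:
SELECT t2.c.value('.', 'int') AS a_value
FROM (SELECT @x) t1(x)
CROSS APPLY t1.x.nodes('//a') t2(c)
-- Returns four rows: 111, 222, 333, 444.
```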

Nocount

Hi, I'd like to know if I need to set Nocount On/Off in a proc only for
SELECT...
--
Please, check my theSpoke:
http://www.thespoke.net/MyBlog/dgroh/MyBlog.aspx|||Paperback Writer wrote:
> Hi, I'd like to know if i need to set Nocount On/Off in a proc only
> for SELECT...
It's good practice to have a SET NOCOUNT ON at the top of every stored
procedure and also to execute upon initial connection if there is a
chance that ad-hoc SQL will be sent from the application to the server.
--
David Gugick
Quest Software
www.imceda.com
www.quest.com|||It's not needed really.
Bojidar Alexandrov
"Paperback Writer" <newsgroupms@.gmail.com> wrote in message
news:uDgS9M3ZFHA.3912@.TK2MSFTNGP10.phx.gbl...
> Hi, I'd like to know if i need to set Nocount On/Off in a proc only for
> SELECT...
> --
> Please, check my theSpoke:
> http://www.thespoke.net/MyBlog/dgroh/MyBlog.aspx
>|||hahahahahahahahah
Classic.
In our environment the single biggest performance gain was obtained by using
this set option, but don't take anybody's word for it. Test for yourself.
When push came to shove and I had to convince the managers of our development
team I used this simple example.
dbcc dropcleanbuffers
go
dbcc freeproccache
go
set nocount on
go
DECLARE @start_time DATETIME
SELECT @start_time = GETDATE()
select * from BIG_SALES2
SELECT 'Elapsed Time, Msec' = DATEDIFF( Ms, @start_time, GETDATE() )
GO
dbcc dropcleanbuffers
go
dbcc freeproccache
go
set nocount off
go
DECLARE @start_time DATETIME
SELECT @start_time = GETDATE()
select * from BIG_SALES2
SELECT 'Elapsed Time, Msec' = DATEDIFF( Ms, @start_time, GETDATE() )
GO
results from first select
Elapsed Time, Msec
--
7516
results from second select
(99999 row(s) affected)
Elapsed Time, Msec
--
7780
"Bojidar Alexandrov" wrote:
> Its not needed really.
> Bojidar Alexandrov
> "Paperback Writer" <newsgroupms@.gmail.com> wrote in message
> news:uDgS9M3ZFHA.3912@.TK2MSFTNGP10.phx.gbl...
> > Hi, I'd like to know if i need to set Nocount On/Off in a proc only for
> > SELECT...
> >
> > --
> > Please, check my theSpoke:
> > http://www.thespoke.net/MyBlog/dgroh/MyBlog.aspx
> >
> >
>|||I had a loop with
BEGIN TRAN
UPDATE
COMMIT
And counted transactions per second. With the setting off, I had 800; with it on, I had 2000...
--
Tibor Karaszi, SQL Server MVP
http://www.karaszi.com/sqlserver/default.asp
http://www.solidqualitylearning.com/
"Joe" <Joe@.discussions.microsoft.com> wrote in message
news:FFD0E059-E183-46A2-90F6-03902FD63FD8@.microsoft.com...
> hahahahahahahahah
> Classic.
> In our environment the single biggest performance gain was obtained by using
> this set option but don't take anybodys word for it. Test for yourself.
> When push came to shove and I had to convice the managers of our development
> team I used this simple example.
> dbcc dropcleanbuffers
> go
> dbcc freeproccache
> go
> set nocount on
> go
> DECLARE @start_time DATETIME
> SELECT @start_time = GETDATE()
> select * from BIG_SALES2
> SELECT 'Elapsed Time, Msec' = DATEDIFF( Ms, @start_time, GETDATE() )
> GO
> dbcc dropcleanbuffers
> go
> dbcc freeproccache
> go
> set nocount off
> go
> DECLARE @start_time DATETIME
> SELECT @start_time = GETDATE()
> select * from BIG_SALES2
> SELECT 'Elapsed Time, Msec' = DATEDIFF( Ms, @start_time, GETDATE() )
> GO
> results from first select
> Elapsed Time, Msec
> --
> 7516
> results from second select
> (99999 row(s) affected)
> Elapsed Time, Msec
> --
> 7780
>
> "Bojidar Alexandrov" wrote:
>> Its not needed really.
>> Bojidar Alexandrov
>> "Paperback Writer" <newsgroupms@.gmail.com> wrote in message
>> news:uDgS9M3ZFHA.3912@.TK2MSFTNGP10.phx.gbl...
>> > Hi, I'd like to know if i need to set Nocount On/Off in a proc only for
>> > SELECT...
>> >
>> > --
>> > Please, check my theSpoke:
>> > http://www.thespoke.net/MyBlog/dgroh/MyBlog.aspx
>> >
>> >
Tibor, with update (or delete) statements it is very clear that it
takes extra time to transfer those "(x rows affected)" messages as another
result set, but the original question was for Select statements only. When there
are statements returning stats, like update and delete, you are very right.
I'm not sure that the example from Joe is right, so I will test it myself.
Bojidar Alexandrov
"Tibor Karaszi" <tibor_please.no.email_karaszi@.hotmail.nomail.com> wrote in
message news:%23%23fBUhFaFHA.616@.TK2MSFTNGP12.phx.gbl...
> I had a loop with
> BEGIN TRAN
> UPDATE
> COMMIT
> And counter transaction per second. With the setting off, I has 800, with
on, I had 2000...
> --
> Tibor Karaszi, SQL Server MVP
> http://www.karaszi.com/sqlserver/default.asp
> http://www.solidqualitylearning.com/
>
> "Joe" <Joe@.discussions.microsoft.com> wrote in message
> news:FFD0E059-E183-46A2-90F6-03902FD63FD8@.microsoft.com...
> > hahahahahahahahah
> > Classic.
> > In our environment the single biggest performance gain was obtained by
using
> > this set option but don't take anybodys word for it. Test for yourself.
> > When push came to shove and I had to convice the managers of our
development
> > team I used this simple example.
> > dbcc dropcleanbuffers
> > go
> > dbcc freeproccache
> > go
> > set nocount on
> > go
> > DECLARE @start_time DATETIME
> > SELECT @start_time = GETDATE()
> > select * from BIG_SALES2
> > SELECT 'Elapsed Time, Msec' = DATEDIFF( Ms, @start_time, GETDATE() )
> > GO
> > dbcc dropcleanbuffers
> > go
> > dbcc freeproccache
> > go
> > set nocount off
> > go
> > DECLARE @start_time DATETIME
> > SELECT @start_time = GETDATE()
> > select * from BIG_SALES2
> > SELECT 'Elapsed Time, Msec' = DATEDIFF( Ms, @start_time, GETDATE() )
> > GO
> >
> > results from first select
> >
> > Elapsed Time, Msec
> > --
> > 7516
> >
> > results from second select
> > (99999 row(s) affected)
> > Elapsed Time, Msec
> > --
> > 7780
> >
> >
> > "Bojidar Alexandrov" wrote:
> >
> >> Its not needed really.
> >>
> >> Bojidar Alexandrov
> >>
> >> "Paperback Writer" <newsgroupms@.gmail.com> wrote in message
> >> news:uDgS9M3ZFHA.3912@.TK2MSFTNGP10.phx.gbl...
> >> > Hi, I'd like to know if i need to set Nocount On/Off in a proc only
for
> >> > SELECT...
> >> >
> >> > --
> >> > Please, check my theSpoke:
> >> > http://www.thespoke.net/MyBlog/dgroh/MyBlog.aspx
> >> >
> >> >
> >>
> >>
I've just tested it out and do not think you are right. My results from 10
tries on a table with about 100k rows are on average the same.
Bojidar Alexandrov
"Joe" <Joe@.discussions.microsoft.com> wrote in message
news:FFD0E059-E183-46A2-90F6-03902FD63FD8@.microsoft.com...
> hahahahahahahahah
> Classic.
> In our environment the single biggest performance gain was obtained by
using
> this set option but don't take anybodys word for it. Test for yourself.
> When push came to shove and I had to convice the managers of our
development
> team I used this simple example.
> dbcc dropcleanbuffers
> go
> dbcc freeproccache
> go
> set nocount on
> go
> DECLARE @start_time DATETIME
> SELECT @start_time = GETDATE()
> select * from BIG_SALES2
> SELECT 'Elapsed Time, Msec' = DATEDIFF( Ms, @start_time, GETDATE() )
> GO
> dbcc dropcleanbuffers
> go
> dbcc freeproccache
> go
> set nocount off
> go
> DECLARE @start_time DATETIME
> SELECT @start_time = GETDATE()
> select * from BIG_SALES2
> SELECT 'Elapsed Time, Msec' = DATEDIFF( Ms, @start_time, GETDATE() )
> GO
> results from first select
> Elapsed Time, Msec
> --
> 7516
> results from second select
> (99999 row(s) affected)
> Elapsed Time, Msec
> --
> 7780
>
> "Bojidar Alexandrov" wrote:
> > Its not needed really.
> >
> > Bojidar Alexandrov
> >
> > "Paperback Writer" <newsgroupms@.gmail.com> wrote in message
> > news:uDgS9M3ZFHA.3912@.TK2MSFTNGP10.phx.gbl...
> > > Hi, I'd like to know if i need to set Nocount On/Off in a proc only
for
> > > SELECT...
> > >
> > > --
> > > Please, check my theSpoke:
> > > http://www.thespoke.net/MyBlog/dgroh/MyBlog.aspx
> > >
> > >
> >
> >
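Pulling the thread's advice together, a minimal sketch of the recommended pattern; the procedure, table, and column names are hypothetical:

```sql
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID int
AS
BEGIN
    -- Suppress the "(n row(s) affected)" messages: each suppressed
    -- message is one less DONE_IN_PROC packet sent to the client,
    -- which is where the gains Tibor measured come from.
    SET NOCOUNT ON;

    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END;
```

As the thread shows, the gain is largest for procedures that execute many statements (loops of small updates), and small for a single SELECT returning a big result set.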