I see many people use subqueries or window functions to do this, but I often write this kind of query without subqueries, in the following way. It uses plain, standard SQL, so it should work in any brand of RDBMS.
SELECT t1.*
FROM mytable t1
LEFT OUTER JOIN mytable t2
ON (t1.UserId = t2.UserId AND t1."Date" < t2."Date")
WHERE t2.UserId IS NULL;
In other words: fetch the row from t1 where no other row exists with the same UserId and a greater Date.
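As a sanity check, the query can be run against a toy table using Python's sqlite3 module (the sample data and the extra Value column here are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE mytable (UserId INTEGER, "Date" TEXT, Value TEXT)')
conn.executemany('INSERT INTO mytable VALUES (?, ?, ?)', [
    (1, '2020-01-01', 'a'),
    (1, '2020-03-01', 'b'),   # latest row for user 1
    (2, '2020-02-01', 'c'),   # only (hence latest) row for user 2
])

# The self-join from the answer: keep only rows with no later match.
rows = conn.execute('''
    SELECT t1.*
    FROM mytable t1
    LEFT OUTER JOIN mytable t2
      ON (t1.UserId = t2.UserId AND t1."Date" < t2."Date")
    WHERE t2.UserId IS NULL
''').fetchall()
print(sorted(rows))  # [(1, '2020-03-01', 'b'), (2, '2020-02-01', 'c')]
```

Exactly one row per UserId comes back: the one with the greatest "Date".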
(I put the identifier "Date" in delimiters because it's an SQL reserved word.)
If t1."Date" = t2."Date", duplicate rows appear. Most tables have an auto-incrementing (sequence) key, e.g. id, which can be used as a tie-breaker. To avoid the duplicates, use the following:
SELECT t1.*
FROM mytable t1
LEFT OUTER JOIN mytable t2
ON t1.UserId = t2.UserId AND ((t1."Date" < t2."Date")
OR (t1."Date" = t2."Date" AND t1.id < t2.id))
WHERE t2.UserId IS NULL;
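A quick, hypothetical demonstration of the tie-break in Python's sqlite3 (sample data invented; SQLite's INTEGER PRIMARY KEY plays the role of the auto-increment id):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    'CREATE TABLE mytable (id INTEGER PRIMARY KEY, UserId INTEGER, "Date" TEXT)')
conn.executemany('INSERT INTO mytable (UserId, "Date") VALUES (?, ?)', [
    (1, '2020-01-01'),
    (1, '2020-01-01'),   # same UserId and "Date": a tie
])

# Without the id comparison, both tied rows would survive the join.
rows = conn.execute('''
    SELECT t1.*
    FROM mytable t1
    LEFT OUTER JOIN mytable t2
      ON t1.UserId = t2.UserId AND ((t1."Date" < t2."Date")
          OR (t1."Date" = t2."Date" AND t1.id < t2.id))
    WHERE t2.UserId IS NULL
''').fetchall()
print(rows)  # [(2, 1, '2020-01-01')] -- only the tied row with the greatest id
```

The row with id = 1 is knocked out because a row with an equal date and a greater id exists.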
Re the comment from @Farhan, here's a more detailed explanation:
An outer join attempts to join t1 with t2. By default, all rows of t1 are returned, and if there is a match in t2, it is also returned. If there is no match in t2 for a given row of t1, then the query still returns the row of t1, and uses NULL as a placeholder for all of t2's columns. That's just how outer joins work in general.
The trick in this query is to design the join's matching condition such that t2 must match the same userid and a greater date. The idea is that if a row exists in t2 with a greater date, then the row in t1 it's compared against can't have the greatest date for that userid. But if there is no match -- i.e. if no row exists in t2 with a greater date than the row in t1 -- we know that the row in t1 was the row with the greatest date for the given userid.
In those cases (when there's no match), the columns of t2 will be NULL -- even the columns specified in the join condition. That's why we use WHERE t2.UserId IS NULL: we're searching for the cases where no row was found with a greater date for the given userid.
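The NULL-placeholder behaviour described above can be seen in a minimal sketch (tables a and b are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript('''
    CREATE TABLE a (id INTEGER);
    CREATE TABLE b (id INTEGER);
    INSERT INTO a VALUES (1), (2);
    INSERT INTO b VALUES (1);
''')

# Row 2 of a has no match in b, so b's column comes back as NULL (None).
rows = conn.execute(
    'SELECT a.id, b.id FROM a LEFT OUTER JOIN b ON a.id = b.id').fetchall()
print(rows)  # [(1, 1), (2, None)]
```

Filtering on b.id IS NULL would keep exactly the unmatched row, which is the mechanism the answer's query relies on.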
SQL Server 2005, 2008, 2012, 2014, 2016, 2017 or 2019:
SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_TYPE='BASE TABLE'
To show only tables from a particular database:
SELECT TABLE_NAME
FROM [<DATABASE_NAME>].INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
Or,
SELECT TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
AND TABLE_CATALOG='dbName' --(for MySql, use: TABLE_SCHEMA='dbName' )
PS: For SQL Server 2000:
SELECT * FROM sysobjects WHERE xtype='U'
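For comparison, other engines expose the same idea through their own catalogs; for example, SQLite (used here only because it ships with Python) keeps table metadata in sqlite_master rather than INFORMATION_SCHEMA:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE users (id INTEGER)')
conn.execute('CREATE TABLE orders (id INTEGER)')

# sqlite_master is SQLite's system catalog; type = 'table' filters out
# indexes, views, and triggers, much like TABLE_TYPE = 'BASE TABLE'.
names = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
print(sorted(names))  # ['orders', 'users']
```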
Quote:
A column isn't an object. If you mean that you expect the column name to be like '%DTN%', the query you want is:
But if the 'DTN' string is just a guess on your part, that probably won't help.
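The quoted query itself is not shown above; presumably it searched the data dictionary for column names matching '%DTN%'. A hedged stand-in using SQLite's PRAGMA table_info (on Oracle you would query all_tab_columns instead; the invoices table and DTN_CODE column are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE invoices (DTN_CODE TEXT, amount REAL)')

# Walk the catalog and report every (table, column) whose name contains DTN.
matches = []
for (table,) in conn.execute("SELECT name FROM sqlite_master WHERE type = 'table'"):
    for row in conn.execute(f'PRAGMA table_info("{table}")'):
        if 'DTN' in row[1].upper():       # row[1] is the column name
            matches.append((table, row[1]))
print(matches)  # [('invoices', 'DTN_CODE')]
```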
By the way, how certain are you that '1/22/2008P09RR8' is a value selected directly from a single column? If you don't know at all where it is coming from, it could be a concatenation of several columns, or the result of some function, or a value sitting in a nested table object. So you might be on a wild goose chase trying to check every column for that value. Can you not start with whatever client application is displaying this value and try to figure out what query it is using to obtain it?
Anyway, diciu's answer gives one method of generating SQL queries to check every column of every table for the value. You can also do similar stuff entirely in one SQL session using a PL/SQL block and dynamic SQL. Here's some hastily-written code for that:
There are some ways you could make it more efficient too.
In this case, given the value you are looking for, you can clearly eliminate any column that is of NUMBER or DATE type, which would reduce the number of queries. Maybe even restrict it to columns where type is like '%CHAR%'.
Instead of one query per column, you could build one query per table like this:
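The code for that is not shown above, but here is a sketch of the one-query-per-table idea, written in Python against SQLite (table names, sample data, and the use of PRAGMA table_info are assumptions; on Oracle you would drive this from all_tab_columns):

```python
import sqlite3

TARGET = '1/22/2008P09RR8'

conn = sqlite3.connect(":memory:")
conn.executescript('''
    CREATE TABLE t1 (a TEXT, b TEXT);
    CREATE TABLE t2 (c TEXT, n INTEGER);
    INSERT INTO t1 VALUES ('x', 'y');
    INSERT INTO t2 VALUES ('1/22/2008P09RR8', 5);
''')

hits = []
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
for table in tables:
    # Keep only text columns, per the suggestion to skip NUMBER/DATE types.
    cols = [r[1] for r in conn.execute(f'PRAGMA table_info("{table}")')
            if r[2].upper() == 'TEXT']
    if not cols:
        continue
    # One query per table: OR together a predicate for each candidate column.
    where = ' OR '.join(f'"{c}" = ?' for c in cols)
    sql = f'SELECT COUNT(*) FROM "{table}" WHERE {where}'
    if conn.execute(sql, [TARGET] * len(cols)).fetchone()[0]:
        hits.append(table)

print(hits)  # ['t2'] -- the tables containing the value
```

This turns N queries (one per column) into one scan per table, which is the efficiency gain being described.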