What data type should I use for data that can be very short, e.g. an HTML link (think Twitter), or very long, e.g. an HTML blog post (think WordPress)?
I am thinking that if I use varchar(4000) it may be too short for an HTML-formatted blog entry, but if I use text, will it take up more space and be less efficient?
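For reference, this is roughly the schema I have in mind (an untested sketch, assuming MySQL; the posts table and its columns are made up):

-- sketch, assuming MySQL; 'posts' and its columns are hypothetical.
-- TEXT/MEDIUMTEXT store only the actual content length plus a small
-- length prefix, so short rows do not pay for a maximum size.
CREATE TABLE posts (
    id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
    body MEDIUMTEXT NOT NULL  -- up to ~16 MB, enough for long HTML posts
);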
I am using the insert() function from Zend_Db_Table_Abstract.
The data being inserted is user input, so naturally I am curious whether ZF does the data cleansing for me, or whether I should do it myself before I call the insert() function.
I have a large number of rows that I would like to copy, but I need to change one field.
I can select the rows that I want to copy:
select * from Table where Event_ID = "120"
Now I want to copy all those rows and create new rows while setting the Event_ID to 155. How can I accomplish this?
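Is it something along these lines? (an untested sketch; colA and colB stand in for the table's real columns, and the auto-increment key is left out so new ids get generated):

-- sketch: copy the matching rows, overriding Event_ID with 155.
-- colA/colB are placeholders for the table's real non-key columns.
INSERT INTO Table (colA, colB, Event_ID)
SELECT colA, colB, 155
FROM Table
WHERE Event_ID = 120;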
I am trying to write a query to select all records from the users table where User_DateCreated (a datetime field) is within 3 months of today.
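Something like this is what I am reaching for (an untested sketch, assuming MySQL; both readings of "3 months" shown):

-- sketch, assuming MySQL: rows created within the last 3 months
SELECT *
FROM users
WHERE User_DateCreated >= DATE_SUB(CURDATE(), INTERVAL 3 MONTH);

-- or, if "= 3 months from today" means created exactly on that day:
SELECT *
FROM users
WHERE DATE(User_DateCreated) = DATE_SUB(CURDATE(), INTERVAL 3 MONTH);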
Any ideas? Thanks!
I designed my database incorrectly; should I fix this while it's in development?
The "user" table is supposed to have a 1:1 relationship with the "userprofile" table;
however, in the actual design the "user" table has a 1:* relationship with the "userprofile" table.
Everything works! But should it be fixed anyway?
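In case it matters, I believe the fix itself would be small; a unique constraint on the foreign key should enforce the 1:1 rule at the database level (a sketch, assuming userprofile carries a user_id foreign key column):

-- sketch: assuming userprofile references user via a user_id column,
-- a unique constraint limits each user to at most one profile row
ALTER TABLE userprofile
    ADD CONSTRAINT uq_userprofile_user UNIQUE (user_id);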
I have a typical scenario and need to understand the best possible way to handle it, so here goes -
I'm developing a solution that will retrieve data from a remote SOAP-based web service and will then push this data to an Oracle database on the network.
Also, this will be a scheduled task that executes every 15 minutes.
I have event queues on the remote service that contain the INSERT/UPDATE/DELETE operations done since the last retrieval, and once I retrieve the events for the last 15 minutes, the service again adds events for the next retrieval.
Now, it's just pushing data to Oracle, so all my interactions are INSERT and UPDATE statements.
There are around 60 tables in Oracle, some of them having 100+ columns. Moreover, for every 15-minute cycle there would be around 60-70 inserts, 100+ updates, and 10-20 deletes.
This will be an executable jar file that terminates after the operation and starts again on the next 15-minute cycle.
So, I need to understand how I should handle WRITE operations (best practices) to improve performance for this application as a whole.
Current test code (on every cycle):
1. Connects to the remote service to get events.
2. Creates a connection with the DB (a single connection object).
3. Identifies the type of operation (INSERT/UPDATE/DELETE) and the table on which it was done.
4. Based on the above, calls the respective method for that type of operation and table.
5. Uses a PreparedStatement with positional parameters, retrieves each column value from the remote service, and assigns it to the statement parameters.
6. Commits the statement and returns to the get-event class to process the next event.
The above is repeated until all the retrieved events are processed, after which the program closes and then starts on the next cycle, and everything repeats again.
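One thing I am considering, in case it helps frame answers: collapsing each INSERT-or-UPDATE decision into a single Oracle MERGE per table (an untested sketch; the customers table and its columns are made up):

-- sketch: a hypothetical upsert for one table, so the app does not
-- have to decide between INSERT and UPDATE for each event
MERGE INTO customers c
USING (SELECT ? AS customer_id, ? AS name, ? AS city FROM dual) src
ON (c.customer_id = src.customer_id)
WHEN MATCHED THEN
    UPDATE SET c.name = src.name, c.city = src.city
WHEN NOT MATCHED THEN
    INSERT (customer_id, name, city)
    VALUES (src.customer_id, src.name, src.city);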
Thanks for the help!
Hi
In my model I am selecting a field as follows:
$query1 = $this->db->query("SELECT dPassword
FROM tbl_login
WHERE dEmailID='[email protected]'");
How do I return dPassword as a variable to my controller?
I tried it this way: return dpassword;
How do I write a query that returns aggregate sales data for California for the past x months?
-----------------------    -----------------------
| order               |    | customer            |
|---------------------|    |---------------------|
| orderId     int     |    | customerId  int     |
| customerId  int     |    | state       varchar |
| deposit     decimal |    -----------------------
| orderDate   date    |
-----------------------

-----------------------
| orderItem           |
|---------------------|
| orderId     int     |
| itemId      int     |
| qty         int     |
| lineTotal   decimal |
| itemPrice   decimal |
-----------------------
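A starting point might be something like this (an untested sketch, assuming MySQL, that state holds the two-letter code, and x = 3 for illustration):

-- untested sketch, assuming MySQL; x = 3 months for illustration.
-- "order" is backtick-quoted because it is a reserved word.
SELECT SUM(oi.lineTotal) AS totalSales
FROM `order` o
JOIN customer c ON c.customerId = o.customerId
JOIN orderItem oi ON oi.orderId = o.orderId
WHERE c.state = 'CA'
  AND o.orderDate >= DATE_SUB(CURDATE(), INTERVAL 3 MONTH);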
I have this requirement and, since I'm new to VB.NET, I don't really have much of an idea how to do this. I have 20 checkboxes, each with a textbox and a dropdown next to it. For example:
table
  tr
    td
      checkbox -- textbox -- dropdownlist
    /td
  /tr
  tr
    td
      chk1 txtbox1 ddl1
    /td
  /tr
  tr
    td
      chk2 txtbox2 ddl2
    /td
  /tr
and so on.
The above structure should all sit in one row of a table. Does anyone know how to build this in code recursively, and also how to take the checkbox data from here and send it to a db table for record insert, update, and select?
Thanks
I have a query like:
SELECT id AS OfferId FROM offers
WHERE CONCAT(partycode, connectioncode) = ?
AND CURDATE() BETWEEN offer_start_date AND offer_end_date
AND id IN (121211, 123341, 151512, 5145626);
Now I want to cache the results of this query using memcache, and so my question is:
How can I cache a query using memcache?
I am currently using CURDATE(), which cannot be used if we want to implement caching, so how can I get current-date functionality without using the CURDATE() function?
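The only idea I have so far is to compute the date once in application code and bind it as a parameter, so the statement text, and hence the cache key, stays stable for the whole day (a sketch):

-- sketch: the second ? is the current date computed in application
-- code (e.g. '2013-06-01'), replacing the CURDATE() call
SELECT id AS OfferId
FROM offers
WHERE CONCAT(partycode, connectioncode) = ?
  AND ? BETWEEN offer_start_date AND offer_end_date
  AND id IN (121211, 123341, 151512, 5145626);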
Imagine that we have a query like this:
select a.col1, b.col2
from t1 a
inner join t2 b on a.col1 = b.col2
where a.col1 = 'abc'
Neither col1 nor col2 has an index.
If I add another restriction to the WHERE clause, one that is always true but uses a column with an index:
select a.col1, b.col2
from t1 a
inner join t2 b on a.col1 = b.col2
where a.col1 = 'abc'
and a.id >= 0 -- condition always true; id has an index
Might the query perform faster, since it could use the index on the id column?
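I realize I could compare the execution plans myself; is this the right way to check it (assuming MySQL)?

-- compare the optimizer's plan with and without the extra predicate
EXPLAIN
SELECT a.col1, b.col2
FROM t1 a
INNER JOIN t2 b ON a.col1 = b.col2
WHERE a.col1 = 'abc'
  AND a.id >= 0;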
My database has around 25 crore (250 million) numbers, and on a weekly basis I need to create an index and then drop it. Creating the index takes a long time to complete, and my log file also keeps growing. Deleting some numbers from that table also takes too much time (every week I have to delete 30 to 50 lakh (3 to 5 million) numbers and add 30 to 40 lakh new ones).
Can you please give me the proper solution?
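For the weekly purge, would deleting in small batches help keep the log in check? (an untested sketch, assuming SQL Server since the log file is the concern; the numbers table and predicate are made up):

-- sketch: delete in batches so each transaction stays short and
-- the transaction log gets a chance to truncate between batches
WHILE 1 = 1
BEGIN
    DELETE TOP (10000)
    FROM numbers                     -- hypothetical table
    WHERE expired = 1;               -- hypothetical predicate

    IF @@ROWCOUNT = 0 BREAK;
END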
$activeQuery = mysql_query("SELECT count(`status`) AS `active` FROM `assignments` WHERE `user` = $user_id AND `status` = 0");
$active = mysql_fetch_assoc($activeQuery);
$failedQuery = mysql_query("SELECT count(`status`) AS `failed` FROM `assignments` WHERE `user` = $user_id AND `status` = 1");
$failed = mysql_fetch_assoc($failedQuery);
$completedQuery = mysql_query("SELECT count(`status`) AS `completed` FROM `assignments` WHERE `user` = $user_id AND `status` = 2");
$completed = mysql_fetch_assoc($completedQuery);
There has to be a better way to do that, right? I don't know how much I need to elaborate as you can see what I'm trying to do, but is there any way to do all of that in one query? I need to be able to output the active, failed, and completed assignments, preferably in one query.
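What I imagine exists is a single pass with conditional counting, something like this (an untested sketch; $user_id would be interpolated as before):

-- sketch: count each status in one scan using conditional aggregation
SELECT
    SUM(CASE WHEN `status` = 0 THEN 1 ELSE 0 END) AS active,
    SUM(CASE WHEN `status` = 1 THEN 1 ELSE 0 END) AS failed,
    SUM(CASE WHEN `status` = 2 THEN 1 ELSE 0 END) AS completed
FROM assignments
WHERE `user` = $user_id;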
CREATE TABLE NewTable AS
SELECT A, B, C FROM Table1
MINUS
SELECT A, B, C FROM Table2
creates a new table with NULL values in column A,
even though neither Table1 nor Table2 has NULL values in this column.
Yet,
SELECT * FROM
(
  SELECT A, B, C FROM Table1
  MINUS
  SELECT A, B, C FROM Table2
)
WHERE A IS NULL
returns 0 rows?
I'm trying to do pagination with a very old version of DB2, and the only way I could figure out to select a range of rows was to use the OVER clause.
This query provides the correct results (the results that I want to paginate over):
select MIN(REFID) as REFID, REFGROUPID from ARMS_REFERRAL where REFERRAL_ID<>'Draft' and REFERRAL_ID not like 'Demo%' group by REFGROUPID order by REFID desc
Results:
REFID  REFGROUPID
302    242
301    241
281    221
261    201
225    142
221    161
...    ...
SELECT * FROM ( SELECT row_number() OVER () AS rid, MIN(REFID) AS REFID, REFGROUPID FROM arms_referral where REFERRAL_ID<>'Draft' and REFERRAL_ID not like 'Demo%' group by REFGROUPID order by REFID desc ) AS t WHERE t.rid BETWEEN 1 and 5
Results:
REFID  REFGROUPID
26     12
22     11
14     8
11     7
6      4
As you can see, it does select the first five rows, but it's obviously not selecting the latest ones.
If I add an ORDER BY clause to the OVER(), it gets closer, but still not totally correct:
SELECT * FROM ( SELECT row_number() OVER (ORDER BY REFGROUPID desc) AS rid, MIN(REFID) AS REFID, REFGROUPID FROM arms_referral where REFERRAL_ID<>'Draft' and REFERRAL_ID not like 'Demo%' group by REFGROUPID order by REFID desc ) AS t WHERE t.rid BETWEEN 1 and 5
REFID  REFGROUPID
302    242
301    241
281    221
261    201
221    161
It's really close, but the 5th result isn't correct (it's actually the 6th result).
How do I make this query correct so it can group by a REFGROUPID and then order by the REFID?
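From staring at it, I suspect the ORDER BY inside OVER() needs to use the aggregated MIN(REFID) rather than REFGROUPID; is this the right shape? (untested):

-- sketch: number the rows by the grouped MIN(REFID) itself
SELECT * FROM (
    SELECT row_number() OVER (ORDER BY MIN(REFID) DESC) AS rid,
           MIN(REFID) AS REFID, REFGROUPID
    FROM arms_referral
    WHERE REFERRAL_ID <> 'Draft'
      AND REFERRAL_ID NOT LIKE 'Demo%'
    GROUP BY REFGROUPID
) AS t
WHERE t.rid BETWEEN 1 AND 5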
I have a column with this data:
IT_AMPH
IT_BARB
IT_BENZ
IT_BUP
SOMA
I want the column next to it to be, literally:
=like "*,IT_AMPH,*"
=like "*,IT_BARB,*"
=like "*,IT_BENZ,*"
etc
Please note that I want the equals sign to be displayed, exactly as shown above.
What would be the formula for this?
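In case the column lives in a database table rather than a spreadsheet, I gather plain string concatenation would produce that literal text, equals sign included (an untested sketch; the items table and code column are made up):

-- sketch: ANSI/SQLite concatenation; 'items' and 'code' are hypothetical
SELECT code,
       '=like "*,' || code || ',*"' AS formula
FROM items;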
Hello,
I am running into a wall regarding changing the password and was wondering if anyone had any ideas. Here are the database values prior to changing the password:
Clear Text password = abc1980
Encrypted Password = Yn1N5l+4AUqkOM3WYO7ww/sCN+o=
Salt = 82qVIhUIoblBRIRvFSZ1fw==
After I change my password to abc1973, the salt remains the same, but the encrypted password changes, which is supposed to happen:
Encrypted Password = rHtjLq3qxAl/7T1GfkxrsHzPsNk=
However, when I try to log in with abc1973 as the password, it does not log me in. If I try abc1980, it logs me in. It is updating the database; is it caching the values somewhere?
Any ideas?
I have this table, where I store multiple ids and an age range (def1, def2):
CREATE TABLE "template_requirements" ("_id" INTEGER NOT NULL,
"templateid" INTEGER,
"def1" VARCHAR(255),
"def2" VARCHAR(255),
PRIMARY KEY("_id"))
Having values such as:
templateid | def1 | def2
100        | 7    | 25
200        | 40   | 90
300        | 7    | 25
300        | 40   | 60
As you see, for templateid 300 we have OR logic: age between 7 and 25, or age between 40 and 60.
I want to get all the template ids that are not for a certain age, like 25.
What's the problem?
If I run a query like this one:
SELECT group_concat(templateid) FROM template_requirements
WHERE '25' NOT BETWEEN CAST(def1 AS INTEGER)
AND CAST(def2 AS INTEGER)
it returns 200, 300, which is wrong: 300 matched on its 40-to-60 row, but it shouldn't be included in the result, because the row with the same templateid covering 7 to 25 fails the NOT BETWEEN check.
What would be the correct query in SQLite? I would like to keep the group_concat part.
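The closest I can get on my own is to exclude any templateid that has at least one range matching the age, then concatenate what is left; is this right? (untested):

-- sketch: drop every templateid with at least one range covering 25,
-- then group_concat the distinct survivors
SELECT group_concat(DISTINCT templateid)
FROM template_requirements
WHERE templateid NOT IN (
    SELECT templateid
    FROM template_requirements
    WHERE 25 BETWEEN CAST(def1 AS INTEGER) AND CAST(def2 AS INTEGER)
);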
I have a database that contains data for many "clients". Currently, we insert tens of thousands of rows into multiple tables every so often using .Net SqlBulkCopy which causes the entire tables to be locked and inaccessible for the duration of the transaction.
As most of our business processes rely upon accessing data for only one client at a time, we would like to be able to load data for one client, while updating data for another client.
To make things more fun, all PKs, FKs and clustered indexes are on GUID columns (I am looking at changing this).
I'm looking at adding a ClientID column to all tables, then partitioning on it. Would this give me the functionality I require?
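For concreteness, this is the kind of setup I am picturing (an untested T-SQL sketch; the boundary values and the table are made up):

-- sketch: partition rows by ClientID so bulk loads for one client
-- touch (and lock) only that client's partition
CREATE PARTITION FUNCTION pfClient (int)
    AS RANGE RIGHT FOR VALUES (100, 200, 300);  -- hypothetical boundaries

CREATE PARTITION SCHEME psClient
    AS PARTITION pfClient ALL TO ([PRIMARY]);

CREATE TABLE dbo.ClientOrders (                 -- hypothetical table
    ClientID int              NOT NULL,
    OrderID  uniqueidentifier NOT NULL,
    Amount   decimal(18, 2)   NOT NULL,
    CONSTRAINT PK_ClientOrders PRIMARY KEY (ClientID, OrderID)
) ON psClient (ClientID);

-- let lock escalation stop at the partition rather than the table
ALTER TABLE dbo.ClientOrders SET (LOCK_ESCALATION = AUTO);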