Search Results

Search found 32223 results on 1289 pages for 'sql 2012'.

Page 748/1289 | < Previous Page | 744 745 746 747 748 749 750 751 752 753 754 755  | Next Page >

  • MySQL "NULL" questions

    - by Camran
    I have a table with several columns. Sometimes some of these fields are left empty (i.e. I won't use them in some cases). My questions: Would it be smart to mark them as NULL in phpMyAdmin? What does the "NULL" property actually do? Would I gain anything at all by setting them to NULL? Can I still use a NULL field the same way even though it is set to NULL?
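
    A minimal sketch of how NULL behaves in practice (assuming MySQL; the table and column names here are made up for illustration):

        CREATE TABLE listing (
            id    INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
            title VARCHAR(255) NOT NULL,
            phone VARCHAR(30) NULL DEFAULT NULL   -- optional field: stores "no value" instead of ''
        );

        -- NULL never compares equal to anything, so it needs IS NULL / IS NOT NULL
        SELECT * FROM listing WHERE phone IS NULL;      -- rows where the field was left empty
        SELECT * FROM listing WHERE phone IS NOT NULL;  -- rows where a value was supplied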

    Read the article

  • running a stored procedure inside a sql trigger

    - by Ying
    Hi all, the business logic in my application requires me to insert a row in table X when a row is inserted in table Y. Furthermore, it should only do so between specific times (which I have to query another table for). I considered running a script every 5 minutes or so to check this, but I stumbled upon triggers and figured they might be a better way to do it. But I find the syntax for procedures a little bewildering and I keep getting an error I have no idea how to fix. Here is where I start:

        CREATE TRIGGER reservation_auto_reply AFTER INSERT ON reservation
        FOR EACH ROW
        BEGIN
            IF NEW.sent_type = 1 /* In-App */ THEN
                INSERT INTO `messagehistory` (`trip`, `fk`, `sent_time`, `status`, `message_type`, `message`)
                VALUES (NEW.trip, NEW.psk, 'NOW()', 'submitted', 4, 'This is an automated reply to reservation');
        END;

    I get an error in the VALUES part of the statement, but I'm not sure where. I still have to query the other table for the information I need, but I can't even get past this part. Any help is appreciated, including links to examples. Thanks
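
    A possible cleanup of the trigger above (a sketch, assuming MySQL): the IF needs a closing END IF, the multi-statement body needs its own delimiter, and NOW() should not be quoted:

        DELIMITER //
        CREATE TRIGGER reservation_auto_reply AFTER INSERT ON reservation
        FOR EACH ROW
        BEGIN
            IF NEW.sent_type = 1 THEN  -- In-App
                INSERT INTO `messagehistory` (`trip`, `fk`, `sent_time`, `status`, `message_type`, `message`)
                VALUES (NEW.trip, NEW.psk, NOW(), 'submitted', 4,
                        'This is an automated reply to reservation');
            END IF;
        END//
        DELIMITER ;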

    Read the article

  • Get previous and next row from current id

    - by Hukr
    How can I get the next row in a table? The columns are:

        `image_id` int(11) NOT NULL auto_increment
        `image_title` varchar(255) NOT NULL
        `image_text` mediumtext NOT NULL
        `image_date` datetime NOT NULL
        `image_filename` varchar(255) NOT NULL

    If the current image is 3, for example, and the next one is 7, then this won't work:

        $query = mysql_query("SELECT * FROM images WHERE image_id = ".intval($_GET['id']));
        echo $_GET['id']+1;

    How should I do it? Thanks
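
    A sketch of the usual approach (assuming MySQL): instead of adding 1 to the id, ask for the nearest id above or below the current one:

        -- next image after the current id (e.g. intval($_GET['id']) = 3)
        SELECT * FROM images WHERE image_id > 3 ORDER BY image_id ASC  LIMIT 1;

        -- previous image before the current id
        SELECT * FROM images WHERE image_id < 3 ORDER BY image_id DESC LIMIT 1;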

    Read the article

  • Count total number of callers?

    - by Kristopher Ives
    I'm currently doing this query to find the guy who makes the most calls:

        SELECT `commenter_name`, COUNT(*) AS `calls`
        FROM `comments`
        GROUP BY `commenter_name`
        ORDER BY `calls` DESC LIMIT 1

    What I want now is to find out how many unique callers there are in total. I tried using DISTINCT but I didn't get anywhere.
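
    A minimal sketch of the distinct count against the same table (assuming MySQL):

        SELECT COUNT(DISTINCT `commenter_name`) AS unique_callers
        FROM `comments`;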

    Read the article

  • LINQ - if condition

    - by ile
    In the code, the commented part is what I need to solve... Is there a way to write such a query in LINQ? I need this because I will need sorting based on Status.

        var result = (
            from contact in db.Contacts
            join user in db.Users on contact.CreatedByUserID equals user.UserID
            join deal in db.Deals on contact.ContactID equals deal.ContactID into deals
            orderby contact.ContactID descending
            select new ContactListView
            {
                ContactID = contact.ContactID,
                FirstName = contact.FirstName,
                LastName = contact.LastName,
                Email = contact.Email,
                Deals = deals.Count(),
                EstValue = deals.Sum(e => e.EstValue),
                SalesAgent = user.FirstName + " " + user.LastName,
                Tasks = 7,
                // This is critical part
                if(Deals == 0) Status = "Prospect";
                else Status = "Client";
                // End of critical part...
            })
            .OrderBy(filterQuery.OrderBy + " " + filterQuery.OrderType)
            .Where(filterQuery.Status);
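
    For reference, the conditional in the projection is what a SQL CASE expression does; a sketch of the shape the generated query would need (DealID is an assumed column name, purely for illustration):

        SELECT c.ContactID,
               CASE WHEN COUNT(d.DealID) = 0 THEN 'Prospect' ELSE 'Client' END AS Status
        FROM Contacts AS c
        LEFT JOIN Deals AS d ON d.ContactID = c.ContactID
        GROUP BY c.ContactID;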

    Read the article

  • Best .NET Solution for Frequently Changed Database

    - by sestocker
    I am currently architecting a small CRUD application. Their database is a huge mess and will be changing frequently over the course of the next 6 months to a year. What would you recommend for my data layer:

        1) ORM (if so, which one?)
        2) Linq2Sql
        3) Stored procedures
        4) Parameterized queries

    I really need a solution that will be dynamic enough (both fast and easy) that I can replace tables and add/delete columns frequently. Note: I do not have much experience with ORM (only a little SubSonic) and generally tend to use stored procedures, so maybe that would be the way to go. I would love to learn Linq2Sql or NHibernate if either would allow for the situation I've described above.

    Read the article

  • Concurrent usage of table causing issues

    - by Sven
    Hello. In our current project we interface with a third-party data provider. They need to insert data into one of our tables. The inserts can be frequent: every 1 minute, every 5 minutes, every 30, depending on how much new data they have to deliver. They use the READ COMMITTED isolation level. On our end we have an application, a Windows service, that calls a web service every 2 minutes to see if there is new data in this table. Our isolation level is REPEATABLE READ. We retrieve the records and update a column on these rows. Now the problem: sometimes the third-party provider needs to insert a lot of data, say 5000 records. They do this in small transactions (5 rows per transaction), but they don't close the connection; they run one transaction and then the next until all records are inserted. This causes issues for our process: we receive a timeout. If this goes on for a long time the database becomes completely unstable. For instance, they may have stopped, but the table somehow still stays unavailable. When I try to do a select on the table, I get several records, but at a certain moment I get no response anymore. It just says retrieving data, but nothing more arrives until I get a timeout exception. The only solution is to restart the database, and then I see the other records. How can we solve this? What is the ideal isolation level setting in this scenario?
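
    One commonly suggested direction, sketched here on the assumption that the engine is SQL Server (the question does not say): let readers use row versioning instead of shared locks, so the constant inserts and the polling reads stop blocking each other. MyDatabase is a placeholder name:

        -- run once against the database; needs a moment with no other active connections
        ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON;

        -- or, keep the database setting and opt in per reading session
        ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;   -- instead of REPEATABLE READ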

    Read the article

  • Multiple ID's in database

    - by eric
    I have a database that contains a few tables such as person, staff, member, and supporter. The person table contains information about every staff member, member, and supporter: name, address, email, and telephone. I also created an id as the primary key. My issue is that I also have a primary key ID for staff, member, and supporter. For instance, the person table contains John with id 1. He is a supporter, so the supporter table has a pID (person id) referencing John with all his information, and an ID (supporter ID). pID references the person table, and every person has an ID incremented by 1 starting at 1. Supporter ID is for every supporter and also starts at 1 and is incremented by 1. Is it possible to have, in the supporter table, pID = 1 and supporter ID = 1? Another person may have pID = 26 and supporter ID = 5. Or will supporter ID have to be different from pID and be something like "sup"? So you would have pID = 1 and supporter ID = sup1, or pID = 26 and supporter ID = sup5.
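
    Yes, the two counters can overlap freely: they live in different tables, so each is only unique within its own table and no "sup" prefix is needed. A sketch of the supporter table (assuming MySQL-style syntax; person.id is the person table's primary key from the question):

        CREATE TABLE supporter (
            supporter_id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,  -- starts at 1, independent of person ids
            pID          INT NOT NULL,                             -- points back at person.id
            FOREIGN KEY (pID) REFERENCES person(id)
        );
        -- supporter_id = 1 with pID = 1 and supporter_id = 5 with pID = 26 can coexist without conflict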

    Read the article

  • Get count matches in query on large table very slow

    - by Roy Roes
    I have a MySQL table "items" with 2 integer fields: seid and tiid. The table has about 35,000,000 records, so it's very large.

        seid  tiid
        -----------
        1     1
        2     2
        2     3
        2     4
        3     4
        4     1
        4     2

    The table has a primary key on both fields, an index on seid and an index on tiid. Someone types in 1 or more tiid values, and I would like to get the seid with the most results. For example, when someone types 1,2,3, I would like to get seid 2 and 4 as the result; they both have 2 matches on the tiid values. My query so far:

        SELECT COUNT(*) as c, seid
        FROM items
        WHERE tiid IN (1,2,3)
        GROUP BY seid
        HAVING c = (SELECT COUNT(*) as c, seid
                    FROM items
                    WHERE tiid IN (1,2,3)
                    GROUP BY seid
                    ORDER BY c DESC LIMIT 1)

    But this query is extremely slow because of the large table. Does anyone know how to construct a better query for this purpose?
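
    One direction worth trying (a sketch, not tested at this scale): a composite index covering both the filter and the grouping column lets the aggregation be answered from the index alone, and running the GROUP BY once instead of twice avoids the repeated scan. The index name is illustrative:

        ALTER TABLE items ADD INDEX idx_tiid_seid (tiid, seid);

        SELECT seid, COUNT(*) AS c
        FROM items
        WHERE tiid IN (1,2,3)
        GROUP BY seid
        ORDER BY c DESC
        LIMIT 10;   -- inspect the top groups; widen the limit or compare against MAX(c) if every tie is needed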

    Read the article

  • Is a primary key automatically an index?

    - by Lieven Cardoen
    If I run Profiler, then it suggests a lot of indexes like this one:

        CREATE CLUSTERED INDEX [_dta_index_Users_c_9_292912115__K1]
        ON [dbo].[Users] ( [UserId] ASC )
        WITH (SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF)
        ON [PRIMARY]

    UserId is the primary key of the table Users. Is this index better than the one already on the table:

        ALTER TABLE [dbo].[Users] ADD CONSTRAINT [PK_Users] PRIMARY KEY NONCLUSTERED ( [UserId] ASC )
        WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF,
              ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
        ON [PRIMARY]
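
    A primary key constraint is always enforced through an index (here a nonclustered one), so the table is not unindexed today; the Profiler suggestion differs mainly in being clustered. A quick way to see what already exists on the table (assuming SQL Server):

        SELECT i.name, i.type_desc, i.is_primary_key, i.is_unique
        FROM sys.indexes AS i
        WHERE i.object_id = OBJECT_ID('dbo.Users');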

    Read the article

  • hibernate restrictions.in with and, how to use?

    - by cometta
    I have a table like the one below:

        id  employee_no  survey_no  name
        1   test         1          test_name
        2   test2        1          test_name2
        3   test3        1          test_name3
        4   test4        2          test_name4

    How do I query this with Restrictions.in, combining the AND conditions below into one IN statement?

        IN [ (survey_no == 1 && employee_no == 'test'),
             (survey_no == 1 && employee_no == 'test2'),
             ... ]
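
    For reference, since both pairs share survey_no = 1, the SQL the criteria ultimately has to produce is just an AND combined with one IN; a sketch (the table name is illustrative, the question does not give one):

        SELECT *
        FROM employee_survey
        WHERE survey_no = 1
          AND employee_no IN ('test', 'test2');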

    Read the article

  • FOSS version of SQLCompare or something similar?

    - by Scott
    Actually, free is good enough, it doesn't have to be open source :) I'm currently using the Schema Compare utility of VS2008, but it doesn't have a command line interface and has some other weaknesses as well. I'm wondering what free tools others are using to provide command line schema comparisons/synchronizations? Thanks.

    Read the article

  • PreparedStatement.setString() method without quotes

    - by Slavko
    I'm trying to use a PreparedStatement with code similar to this:

        SELECT * FROM ? WHERE name = ?

    Obviously, what happens when I use setString() to set the table and name field is this:

        SELECT * FROM 'my_table' WHERE name = 'whatever'

    and the query doesn't work. Is there a way to set the String without quotes so the line looks like this:

        SELECT * FROM my_table WHERE name = 'whatever'

    or should I just give it up and use the regular Statement instead (the arguments come from another part of the system, neither of those is entered by a user)?

    Read the article

  • Calculating percent of votes inside mysql statement.

    - by Beck
    UPDATE polls_options
        SET `votes` = `votes` + 1,
            `percent` = ROUND((`votes` + 1) / (SELECT voters FROM polls WHERE poll_id = ? LIMIT 1) * 100, 1)
        WHERE option_id = ? AND poll_id = ?

    I don't have table data yet to test it properly. :) And by the way, what column type should the percentage be stored in? Thanks for the help!
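
    A sketch of the same update with the percentage kept in a fractional column (assuming MySQL; DECIMAL(4,1) is an illustrative choice, an integer column would truncate values like 33.3). Assigning `percent` before `votes` keeps the arithmetic based on the old vote count:

        ALTER TABLE polls_options MODIFY `percent` DECIMAL(4,1) NOT NULL DEFAULT 0.0;

        UPDATE polls_options
        SET `percent` = ROUND((`votes` + 1) /
                        (SELECT voters FROM polls WHERE poll_id = ? LIMIT 1) * 100, 1),
            `votes`   = `votes` + 1
        WHERE option_id = ? AND poll_id = ?;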

    Read the article

  • .tpl files and website problem

    - by whitstone86
    Apologies if the title is in lowercase, but it's describing an extension format. I've started using Dwoo as my template engine for PHP, and am not sure how to convert my PHP files into .tpl templates. My site is similar to, but not the same as, http://library.digiguide.com/lib/programme/Medium-319648/Drama/ in its design (except the colour scheme and site name are different, plus it's in PHP, so copyright issues are avoided here; the design arguably could be seen as parody even though the content is different). The database is called tvguide, and it has these tables:

        Programmes: House M.D., Medium, Police Stop!, American Dad!
        (table names: housemdonair, mediumonair, policestopair, americandad1)

        Episodes: the episode guides for the programmes above
        (table names: housemdepidata, mediumepidata, policestopepidata, americandad1epidata)

    All of the episode tables have the following columns:

        id (not an auto-increment, since I wish to dynamically generate a page from this)
        episodename
        seriesnumber
        episodenumber
        episodesynopsis

    (the four after id do exactly as stated). I have a pagination script that works; it displays 20 records per page as I want it to. It's called pmcPagination.php, but I won't post it in full since it would take up too much space. However, I'm trying to get it so that variables are filled in like these pages (OK, the examples below are ASP.NET, but if there's a PHP/MySQL equivalent I would gratefully appreciate it!):

        http://library.digiguide.com/lib/episode/741168
        http://library.digiguide.com/lib/episode/714829

    with the episode detail and data. My site works, but it's fairly basic, and it's not online yet until my bugs are fixed. mod_rewrite is enabled, so my site reads as http://mytvguide.com/episode/123456 or http://mytvguide.com/programme/123456 or http://mytvguide.com/WorldInAction/123456/Documentary/. I've tried looking on Google, but am not sure how to get this TV guide script to work at its best; I think .tpl plus PHP/MySQL is the way to go. Any advice on making this project into a fully workable, ready-to-use site would be much appreciated; I've spent months refining this project! P.S. Apologies for the length of this; I hope it describes my project well.

    Read the article

  • C# - Password Database

    - by user335932
    So I want to make a program that allows you to store and search for usernames/passwords for online sites they're signed up to. I know C# has some database options, but I don't know much about them. I also heard that it can read/write Excel files. What do you think is best for storing the data? Also, do databases need to be stored online on a server, or can they reside in the program files?

    Read the article

  • Simple aggregating query very slow in PostgreSql, any way to improve?

    - by Ash
    Hi. I have a table which holds files and their types, such as:

        CREATE TABLE files (
            id SERIAL PRIMARY KEY,
            name VARCHAR(255),
            filetype VARCHAR(255),
            ...
        );

    and another table for holding file properties, such as:

        CREATE TABLE properties (
            id SERIAL PRIMARY KEY,
            file_id INTEGER CONSTRAINT fk_files REFERENCES files(id),
            size INTEGER,
            ...  -- other property fields
        );

    The file_id field has an index. The files table has around 800k rows, and the properties table around 200k (not all files necessarily have/need properties). I want to run aggregating queries, for example to find the average size and standard deviation for all file types. But it's very slow, around 70 seconds for the latter query. I understand it needs a sequential scan, but it still seems too much. Here's the query:

        SELECT f.filetype, avg(size), stddev(size)
        FROM files as f, properties as pr
        WHERE f.id = pr.file_id
        GROUP BY f.filetype;

    and the EXPLAIN ANALYZE output:

        HashAggregate  (cost=140292.20..140293.94 rows=116 width=13) (actual time=74013.621..74013.954 rows=110 loops=1)
          ->  Hash Join  (cost=6780.19..138945.47 rows=179564 width=13) (actual time=1520.104..73156.531 rows=179499 loops=1)
                Hash Cond: (f.id = pr.file_id)
                ->  Seq Scan on files f  (cost=0.00..108365.41 rows=1140941 width=9) (actual time=0.998..62569.628 rows=805270 loops=1)
                ->  Hash  (cost=3658.64..3658.64 rows=179564 width=12) (actual time=1131.053..1131.053 rows=179499 loops=1)
                      ->  Seq Scan on properties pr  (cost=0.00..3658.64 rows=179564 width=12) (actual time=0.753..557.171 rows=179574 loops=1)
        Total runtime: 74014.520 ms

    Any ideas why it is so slow and how to make it faster?
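
    Almost all of the time in that plan is the sequential scan of files (about 62 s for 800k rows), which usually points at table bloat or stale statistics rather than the join itself; a sketch of things worth trying (the index name is illustrative):

        VACUUM FULL VERBOSE files;   -- reclaims dead space; takes an exclusive lock on the table
        ANALYZE files;               -- refreshes planner statistics

        -- optional: lets the properties side be read from an index instead of a full scan
        CREATE INDEX idx_properties_file_id_size ON properties (file_id, size);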

    Read the article

  • Performing Inner Join for Multiple Columns in the Same Table

    - by frankiefrank
    I have a scenario which I'm a bit stuck on. Let's say I have a survey about colors, and I have one table for the color data and another for people's answers.

        tbColors
        color_code , color_name
        1          , 'blue'
        2          , 'green'
        3          , 'yellow'
        4          , 'red'

        tbAnswers
        answer_id , favorite_color , least_favorite_color , color_im_allergic_to
        1         , 1              , 2                    , 3
        2         , 3              , 1                    , 4
        3         , 1              , 1                    , 2
        4         , 2              , 3                    , 4

    For display I want to write a SELECT that presents the answers table but uses the color_name column from tbColors. I understand the "most stupid" way to do it is naming tbColors three times in the FROM section, using a different alias for each column to replace. How would a non-stupid way look?
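
    Joining the lookup table once per column is in fact the standard answer here; it just needs three aliases rather than anything smarter. A sketch using the names from the question:

        SELECT a.answer_id,
               fav.color_name AS favorite_color,
               lst.color_name AS least_favorite_color,
               alg.color_name AS color_im_allergic_to
        FROM tbAnswers AS a
        JOIN tbColors AS fav ON fav.color_code = a.favorite_color
        JOIN tbColors AS lst ON lst.color_code = a.least_favorite_color
        JOIN tbColors AS alg ON alg.color_code = a.color_im_allergic_to;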

    Read the article

  • 2-column table with two foreign keys. Performance/design question.

    - by Emanuel
    Hello everyone! I recently ran into a quite complex problem, and after looking around a lot I couldn't find a solution to it. I've found answers to my questions many times before on stackoverflow.com, so I decided to post here. I'm making a user/group management system for a web-based project, and I'm storing all related data in a PostgreSQL database. This system relies on three tables:

        USERS
        GROUPS
        GROUP_USERS

    The first two tables simply define all the users and all the groups on the site, and the last table, GROUP_USERS, stores the groups every user is part of. It only has two columns:

        USER_ID
        GROUP_ID

    Since every user can be a member of several groups, I decided to make a separate table for this purpose rather than storing a comma-separated column in the USERS table. Now, both columns are foreign keys, and I want to make them both primary keys as well, since each combination of USER_ID and GROUP_ID has to be unique; if I just give them the UNIQUE constraint, pgAdmin tells me that each table should have at least one primary key. But now I am stuck with what seems to be a lot of indexes and relations on a very small table containing only numbers. In the end, I want this table to be as fast as possible, even if it contains tens of thousands of rows. Size on disk shouldn't be a problem since it's just numbers anyway, but it feels quite stupid to have a full-sized index referring to a smaller table. Should I stick with my current solution, store comma-separated values in a column in the USERS table, or is there another solution I should be aware of? P.S. I don't want to use an array column, even though they are supported by PostgreSQL; I want to be as generic as possible so I can switch databases later on, if necessary. EDIT: In other words, will using a compound primary key and two foreign keys in one table with only two columns have a negative impact on performance, rather than the opposite, due to the size of the generated index? Thank you!
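
    A sketch of the usual declaration (assuming PostgreSQL, as in the question, and that the id columns of USERS and GROUPS are both called id): the composite primary key doubles as the uniqueness constraint and as an index on user_id, so nothing extra is needed for that side:

        CREATE TABLE group_users (
            user_id  INTEGER NOT NULL REFERENCES users(id),
            group_id INTEGER NOT NULL REFERENCES groups(id),
            PRIMARY KEY (user_id, group_id)   -- also serves lookups by user_id
        );

        -- optional second index for the reverse lookup (all users of a group)
        CREATE INDEX idx_group_users_group ON group_users (group_id);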

    Read the article

  • Optimize SQL query (Facebook-like application)

    - by fabriciols
    My application is similar to Facebook, and I'm trying to optimize the query that gets a user's mural records. A user's records are those where he appears as src or as dst. The src is stored directly in usermuralentry; the dst list is in usermuralentry_user. So an entry can have one src and many dst. I have these tables:

        mysql> desc usermuralentry;
        +-----------------+------------+------+-----+---------+----------------+
        | Field           | Type       | Null | Key | Default | Extra          |
        +-----------------+------------+------+-----+---------+----------------+
        | id              | int(11)    | NO   | PRI | NULL    | auto_increment |
        | user_src_id     | int(11)    | NO   | MUL | NULL    |                |
        | private         | tinyint(1) | NO   |     | NULL    |                |
        | content         | longtext   | NO   |     | NULL    |                |
        | date            | datetime   | NO   |     | NULL    |                |
        | last_update     | datetime   | NO   |     | NULL    |                |
        +-----------------+------------+------+-----+---------+----------------+

        mysql> desc usermuralentry_user;
        +-------------------+---------+------+-----+---------+----------------+
        | Field             | Type    | Null | Key | Default | Extra          |
        +-------------------+---------+------+-----+---------+----------------+
        | id                | int(11) | NO   | PRI | NULL    | auto_increment |
        | usermuralentry_id | int(11) | NO   | MUL | NULL    |                |
        | userinfo_id       | int(11) | NO   | MUL | NULL    |                |
        +-------------------+---------+------+-----+---------+----------------+

    And the following query retrieves the entries for two users:

        SELECT *
        FROM usermuralentry AS a, usermuralentry_user AS b
        WHERE a.user_src_id IN (1, 2)
           OR (a.id = b.usermuralentry_id AND b.userinfo_id IN (1, 2));

    EXPLAIN shows a full scan on both tables:

        table: b  type: ALL  possible_keys: usermuralentry_id, usermuralentry_user_bcd7114e, usermuralentry_user_6b192ca7
                  key: NULL  rows: 147188   Extra:
        table: a  type: ALL  possible_keys: PRIMARY
                  key: NULL  rows: 1371289  Extra: Range checked for each record (index map: 0x1)

    but it is taking A LOT of time... Some tips to optimize? Can the table schema be better for my application?
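
    The OR mixes a filter on one table with the join condition, which is what forces the plan above into scanning both tables for every row. A common rewrite (a sketch, not tested against this schema) splits the two cases into a UNION so each half can use its own index; it returns each matching entry once, which is presumably the intent:

        SELECT e.*
        FROM usermuralentry AS e
        WHERE e.user_src_id IN (1, 2)

        UNION

        SELECT e.*
        FROM usermuralentry AS e
        JOIN usermuralentry_user AS u ON u.usermuralentry_id = e.id
        WHERE u.userinfo_id IN (1, 2);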

    Read the article

  • Is it possible with dynamic TSQL query ?

    - by eugeneK
    I have a very long SELECT query which I need to filter based on some parameters. I'm trying to avoid having different stored procedures, or IF statements inside a single stored procedure, by making part of the TSQL dynamic. I will leave out the long SELECT for example's sake:

        SELECT a FROM b WHERE c = @c OR d = @d

    @c and @d are filter parameters; only one can filter at a time, but both filters could also be disabled. A value of 0 for either of them means that parameter is disabled, so I can build an nvarchar with the WHERE clause in it. How do I integrate a dynamic fragment here, so the WHERE can be appended to the normal query? I cannot put the whole query into a big nvarchar, because there are too many things in it that will require changes (i.e. WHENs, subqueries, joins).
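
    A sketch of the usual pattern with sp_executesql (assuming SQL Server and that @c and @d are the procedure's parameters; int is an assumed type): only the WHERE clause is assembled as text, while the values stay parameterized:

        DECLARE @sql NVARCHAR(MAX) = N'SELECT a FROM b WHERE 1 = 1';

        IF @c <> 0 SET @sql = @sql + N' AND c = @c';
        IF @d <> 0 SET @sql = @sql + N' AND d = @d';

        EXEC sp_executesql @sql, N'@c int, @d int', @c = @c, @d = @d;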

    Read the article
