Search Results

Search found 8320 results on 333 pages for 'tables'.


  • Looping Through Database Query

    - by DrakeNET
    I am creating a very simple script. The purpose of the script is to pull a question from the database and display all answers associated with that particular question. We are dealing with two tables here and there is a foreign key from the question database to the answer database so answers are associated with questions. Hope that is enough explanation. Here is my code. I was wondering if this is the most efficient way to complete this or is there an easier way?

        <html>
        <head>
        <title>Advise Me</title>
        <head>
        <body>
        <h1>Today's Question</h1>
        <?php
        //Establish connection to database
        require_once('config.php');
        require_once('open_connection.php');

        //Pull the "active" question from the database
        $todays_question = mysql_query("SELECT name, question FROM approvedQuestions WHERE status = active")
            or die(mysql_error());

        //Variable to hold $todays_question aQID
        $questionID = mysql_query("SELECT commentID FROM approvedQuestions WHERE status = active")
            or die(mysql_error());

        //Print today's question
        echo $todays_question;

        //Print comments associated with today's question
        $sql = "SELECT commentID FROM approvedQuestions WHERE status = active";
        $result_set = mysql_query($sql);
        $result_num = mysql_numrows($result_set);

        for ($a = 0; $a < $result_num; $a++) {
            echo $sql;
        }
        ?>
        </body>
        </html>
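    A possible direction, sketched here and not taken from the post: fetch the question together with its answers in a single JOIN instead of issuing separate queries, then loop over that one result set. The answer table's name and columns below are assumptions, since the post only shows the questions table.

        -- Sketch only: `comments` and `comment_text` are assumed names.
        -- Note that status is compared against a quoted string literal.
        SELECT q.name, q.question, c.comment_text
        FROM approvedQuestions AS q
        JOIN comments AS c ON c.commentID = q.commentID
        WHERE q.status = 'active';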


  • table column accepting "0" as a member Id

    - by user682417
    I have two tables: a members table with columns member id, member first name and member last name, and a guest passes table with columns guest pass id, member id and issue date. I have a list view that displays guest pass details (member name and issue date) and two text boxes for entering the member name and the issue date. The member name text box is an auto-complete text box and is working fine. The problem is that when I enter a name that is not in the members table, the insert still goes through: the list view shows a blank member name and the guest passes table stores "0" as the member id. I don't want a blank member name in the list view and I don't want "0" stored in the guest passes table. This is the insert statement:

        sql2 = @"INSERT INTO guestpasses(member_Id,guestPass_IssueDate)";
        sql2 += " VALUES(";
        sql2 += "'" + tbCGuestPassesMemberId.Text + "'";
        sql2 += ",'" + tbIssueDate.Text + "'";
        guestpassmemberId = memberid

    Is there any validation that needs to be done? Can anyone make a suggestion on this please? This is the auto-complete text box statement:

        sql = @"SELECT member_Id FROM members WHERE concat(member_Firstname,'',member_Lastname) ='" + tbMemberName.Text + "'";
        if (dt != null)
        {
            if (dt.Rows.Count > 0)
            {
                tbCGuestPassesMemberId.Text = Convert.ToInt32(dt.Rows[0]["member_Id"]).ToString();
            }
        }

    Is there any kind of validation that can be done with the SQL query? Please help me.
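    One database-side safeguard, sketched here rather than taken from the post: insert only when the member id actually exists, so an unmatched name inserts nothing at all. Table and column names follow the post; the parameter names are placeholders, and the exact parameter syntax depends on the data provider in use.

        -- Sketch: @memberId and @issueDate are hypothetical parameters; passing
        -- values as parameters also avoids the string concatenation shown above.
        INSERT INTO guestpasses (member_Id, guestPass_IssueDate)
        SELECT m.member_Id, @issueDate
        FROM members AS m
        WHERE m.member_Id = @memberId;
        -- A foreign key from guestpasses.member_Id to members.member_Id would also
        -- reject an id of 0 outright, assuming no member row actually has id 0.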


  • Data mixing SQL Server

    - by Pythonizo
    I have three tables and a range of two dates: Services, ServicesClients, ServicesClientsDone, @StartDate, @EndDate.

    Services:

        ID | Name
        1  | Supervisor
        2  | Monitor
        3  | Manufacturer

    ServicesClients:

        IDServiceClient | IDClient | IDService
        1               | 1        | 1
        2               | 1        | 2
        3               | 2        | 2
        4               | 2        | 3

    ServicesClientsDone:

        IDServiceClient | Period
        1               | 201208
        3               | 201210

    Period = YYYYMM. I need to insert into ServicesClientsDone the range of months from @StartDate up to @EndDate. I also have a temporary table (#Periods) with the following list:

        Period
        201208
        201209
        201210

    The query I need should give me back the following list:

        IDServiceClient | Period
        1               | 201209
        1               | 201210
        2               | 201208
        2               | 201209
        2               | 201210
        3               | 201208
        3               | 201209
        4               | 201208
        4               | 201209
        4               | 201210

    That is, every client service combined with every period in the temporary table, excluding the combinations already present in ServicesClientsDone. This is what I have.

    Building the periods table:

        DECLARE @i int
        DECLARE @mm int
        DECLARE @yyyy int
        DECLARE @StartDate datetime
        DECLARE @EndDate datetime

        set @EndDate = (SELECT GETDATE())
        set @StartDate = (SELECT DATEADD(MONTH, -3, GETDATE()))

        CREATE TABLE #Periods (Period int)

        set @i = 0
        WHILE @i <= DATEDIFF(MONTH, @StartDate, @EndDate)
        BEGIN
            SET @mm = DATEPART(MONTH, DATEADD(MONTH, @i, @StartDate))
            SET @yyyy = DATEPART(YEAR, DATEADD(MONTH, @i, @StartDate))
            INSERT INTO #Periods (Period)
                VALUES (CAST(@yyyy as varchar(4)) + RIGHT('00' + CONVERT(varchar(6), @mm), 2))
            SET @i = @i + 1;
        END

    Relation between ServicesClients and Services:

        SELECT s.Name, sc.IDClient
        FROM Services AS s
        JOIN ServicesClients AS sc ON sc.IDService = s.ID

    Services already done and when:

        SELECT s.Name, scd.Period
        FROM Services AS s
        JOIN ServicesClients AS sc ON sc.IDService = s.ID
        JOIN ServicesClientsDone AS scd ON scd.IDServiceClient = sc.IDServiceClient
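    One way to produce that list, sketched here rather than taken from the post: cross join the client services with the period list and filter out the combinations already recorded.

        -- Sketch under the schema shown in the post.
        SELECT sc.IDServiceClient, p.Period
        FROM ServicesClients AS sc
        CROSS JOIN #Periods AS p
        WHERE NOT EXISTS (SELECT 1
                          FROM ServicesClientsDone AS scd
                          WHERE scd.IDServiceClient = sc.IDServiceClient
                            AND scd.Period = p.Period)
        ORDER BY sc.IDServiceClient, p.Period;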


  • "Add another item" form functionality

    - by GSTAR
    I have a form that lets a user enter their career history - it's a very simple form with only 3 fields - type (dropdown), details (textfield) and year (dropdown). Basically I want to include some dynamic functionality whereby the user can enter multiple items on the same page and then submit them all in one go. I had a search on Google and found some examples, but they were all based on tables - my markup is based on DIV tags:

        <div class="form-fields">
            <div class="row">
                <label for="type">Type</label>
                <select id="type" name="type">
                    <option value="Work">Work</option>
                </select>
            </div>
            <div class="row">
                <label for="details">Details</label>
                <input id="details" type="text" name="details" />
            </div>
            <div class="row">
                <label for="year">Year</label>
                <select id="year" name="year">
                    <option value="2010">2010</option>
                </select>
            </div>
        </div>

    So basically the 3 DIV tags with class "row" need to be duplicated, or to simplify things the div "form-fields" could just be duplicated. I am also aware that the input names would have to be converted to array format. Additionally each item will require a "remove" button. There will be a main submit button at the bottom which submits all the data. Anyone got an elegant solution for this?


  • Oracle Query Optimization: Why is My Second Query Faster?

    - by Patrick Cuff
    I was having some performance issues with an Oracle query, so I downloaded a trial of the Quest SQL Optimizer for Oracle, which made some changes that dramatically improved the query's performance. I'm not exactly sure why the recommended query had such an improvement; can anyone provide an explanation?

    Before:

        SELECT t1.version_id, t1.id, t2.field1, t3.person_id, t2.id
        FROM table1 t1, table2 t2, table3 t3
        WHERE t1.id = t2.id
        AND t1.version_id = t2.version_id
        AND t2.id = 123
        AND t1.version_id = t3.version_id
        AND t1.VERSION_NAME <> 'AA'
        order by t1.id

    Plan Cost: 831. Elapsed Time: 00:00:21.40. Number of Records: 40,717.

    After:

        SELECT /*+ USE_NL_WITH_INDEX(t1) */ t1.version_id, t1.id, t2.field1, t3.person_id, t2.id
        FROM table2 t2, table3 t3, table1 t1
        WHERE t1.id = t2.id + 0
        AND t1.version_id = t2.version_id + 0
        AND t2.id = 123
        AND t1.version_id = t3.version_id + 0
        AND t1.VERSION_NAME || '' <> 'AA'
        AND t3.version_id = t2.version_id + 0
        order by t1.id

    Plan Cost: 686. Elapsed Time: 00:00:00.95. Number of Records: 40,717.

    Questions:

    - Why does re-arranging the order of the tables in the FROM clause help?
    - Why does adding + 0 to the WHERE clause comparisons help?
    - Why does || '' <> 'AA' in the WHERE clause VERSION_NAME comparison help? Is this a more efficient way of handling possible nulls on this column?


  • Stored Procedure, 'incorrect syntax error'

    - by jacksonSD
    Attempting to figure out sp's, and I'm getting this error: "Msg 156, Level 15, State 1, Line 5 Incorrect syntax near the keyword 'Procedure'." The error seems to be on the CREATE, but I can drop other existing tables with stored procedures the exact same way, so I'm not clear on why this isn't working. Can anyone shed some light?

        Begin
        Set nocount on
        Begin Try
            Create Procedure uspRecycle
            as
            if OBJECT_ID('Recycle') is not null
                Drop Table Recycle
            create table Recycle
            (RecycleID integer constraint PK_integer primary key,
             RecycleType nchar(10) not null,
             RecycleDescription nvarchar(100) null)
            insert into Recycle (RecycleID,RecycleType,RecycleDescription)
                values ('1','Compost','Product is compostable, instructions included in packaging')
            insert into Recycle (RecycleID,RecycleType,RecycleDescription)
                values ('2','Return','Product is returnable to company for 100% reuse')
            insert into Recycle (RecycleID,RecycleType,RecycleDescription)
                values ('3','Scrap','Product is returnable and will be reclaimed and reprocessed')
            insert into Recycle (RecycleID,RecycleType,RecycleDescription)
                values ('4','None','Product is not recycleable')
        End Try
        Begin Catch
            DECLARE @ErrMsg nvarchar(4000);
            SELECT @ErrMsg = ERROR_MESSAGE();
            Throw 50001, @ErrMsg, 1;
        End Catch
        -- checking to see if table exists and is loaded:
        If (Select count(*) from Recycle) >1
        begin
            Print 'Recycle table created and loaded ';
            Print getdate()
        End
        set nocount off
        End
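    For context, and not taken from the post: SQL Server requires CREATE PROCEDURE to be the first statement in its batch, so nesting it inside BEGIN / BEGIN TRY raises exactly this error. A minimal sketch of the usual shape, with the procedure created in its own batch:

        -- Sketch: GO is the batch separator understood by SSMS/sqlcmd, not T-SQL itself.
        IF OBJECT_ID('uspRecycle', 'P') IS NOT NULL
            DROP PROCEDURE uspRecycle;
        GO
        CREATE PROCEDURE uspRecycle
        AS
        BEGIN
            SET NOCOUNT ON;
            -- the table drop/create/insert logic from the post goes inside the procedure body
        END
        GO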


  • In sync query calls, one query causing other query to run slower. Why?

    - by Irchi
    Sorry for the long question, but I think this is an interesting situation and I couldn't find any explanations for it.

    I was involved in optimization of an application that performed a large number of sequential SELECT and INSERT statements on a single dedicated SQL Server database. The process needs to INSERT a large number of records into a table, but for each of them there should be some value mappings, which are performed using SELECT statements on another table in the same database. For a specific execution, it took 90 minutes to run. I used a profiler (JProfiler - the application is Java-based) to determine how much time each part of the application takes. It showed that 60% of the time was spent on INSERT method calls, and almost 20% on SELECT calls (the rest distributed in other parts).

    After some trials, I came to this situation: I commented out the INSERT query that took 60% of the time. I was expecting the total run time to be around 35 minutes, as I had removed 60% of the 90 minutes. But the whole process took the same 90 minutes (doing only SELECTs and nothing else), and each SELECT took longer this time!

    Everything was running sync, there were no async calls, and there was only one single thread of execution. The SELECT and INSERT queries are very simple and don't have anything special, and they are on different tables, but in the same DB. I tested with both the DB on the application machine and on a remote network machine. I can't think of any explanation for this, as the profiler (application profiler, not SQL Profiler) reported the changes in the method call times, and by removing INSERT statements the SELECT statements took longer to run.

    Can anyone give me some kind of explanation of what could have happened? (There can't be cache / query optimization effects, because the queries were run in sync, in a single thread, and it was far from affecting the cache this much.) I should note that the bottleneck of the speed was in SQL Server, using most of the CPU time.


  • How to document an existing small web site (web application), inside and out?

    - by Ricket
    We have a "web application" which has been developed over the past 7 months. The problem is, it was not really documented. The requirements consisted of a small bulleted list from the initial meeting 7 months ago (it's more of a "goals" statement than software requirements). It has collected a number of features which stemmed from small verbal or chat discussions. The developer is leaving very soon. He wrote the entire thing himself and he knows all of the quirks and underlying rules to each page, but nobody else really knows much more than the user interface side of it; which of course is the easy part, as it's made to be intuitive to the user. But if someone needs to repair or add a feature to it, the entire thing is a black box. The code has some minimal comments, and of course the good thing about web applications is that the address bar points you in the right direction towards fixing a problem or upgrading a page. But how should the developer go about documenting this web application? He is a bit lost as far as where to begin. As developers, how do you completely document your web applications for other developers, maintainers, and administrative-level users? What approach do you use, where do you start, do you have a template? An idea of magnitude: it uses PHP, MySQL and jQuery. It has about 20-30 main (frontend) files, along with about 15 included files and a couple folders of some assets. So overall it's a pretty small application. It interfaces with 7 MySQL tables, each one well-named, so I think the database end is pretty self-explanatory. There is a config.inc.php file with definitions of consts like the MySQL user details, some from/to emails, and URLs which PHP uses to insert into emails and pages (relative and absolute paths, basiecally). There is some AJAX via jQuery. Please comment if there is any other information that would help you help me and I will be glad to edit it in.


  • Mysql query, need suggestion or solution

    - by Xi Kam
    Can anyone help me? I have two tables and I need records from both.

    Query 1:

        SELECT SUM(rec_issued) AS issed, regen_id, YEAR(issue_date) AS iYear, MONTH(issue_date) AS iMonth
        FROM `view_rec_issued`
        WHERE `regen_id` = 2
        GROUP BY YEAR(issue_date) DESC, MONTH(issue_date) DESC
        ORDER BY issue_date ASC

        issed  regen_id  iYear  iMonth
        424    2         2011   3
        4340   2         2011   4
        4235   2         2011   5
        10570  2         2012   2
        4761   2         2012   3
        5000   2         2012   4
        3700   2         2012   5
        3414   2         2012   6
        3700   2         2012   7
        2992   2         2012   8
        995    2         2012   10

    Query 2:

        SELECT SUM(total_redem) AS redemed, regen_id, YEAR(redemption_date) AS rYear, MONTH(redemption_date) AS rMonth
        FROM `recredem_month_wise`
        WHERE `regen_id` = 2
        GROUP BY YEAR(redemption_date) DESC, MONTH(redemption_date) DESC
        ORDER BY redemption_date ASC

        redemed  regen_id  rYear  rMonth
        424      2         2011   3
        260      2         2011   4
        6523     2         2011   5
        1070     2         2011   6
        200      2         2011   10
        500      2         2011   11
        9750     2         2012   2
        5000     2         2012   3
        5500     2         2012   4
        3803     2         2012   5
        3700     2         2012   7
        3000     2         2012   8

    But I want it as:

        issed  regen_id  iYear  iMonth  redemed  regen_id  rYear  rMonth
        424    2         2011   3       424      2         2011   3
        4340   2         2011   4       260      2         2011   4
        4235   2         2011   5       6523     2         2011   5
        NULL   NULL      NULL   NULL    1070     2         2011   6
        NULL   NULL      NULL   NULL    200      2         2011   10
        NULL   NULL      NULL   NULL    500      2         2011   11
        10570  2         2012   2       9750     2         2012   2
        4761   2         2012   3       5000     2         2012   3
        5000   2         2012   4       5500     2         2012   4
        3700   2         2012   5       3803     2         2012   5
        3414   2         2012   6       NULL     NULL      NULL   NULL
        3700   2         2012   7       3700     2         2012   7
        2992   2         2012   8       3000     2         2012   8
        995    2         2012   10      NULL     NULL      NULL   NULL

    In these tables regen_id is unique, and I need the data by year and month; if either table has no record for a particular month and year it should return zero or NULL. In every row the year and month should match (iYear = rYear and iMonth = rMonth), so both pairs of fields can be merged into a single year and month column - no need to show them twice. Thank you, please look at this problem.
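    MySQL has no FULL OUTER JOIN, so one common approach, sketched here rather than taken from the post, is to UNION a LEFT JOIN with a RIGHT JOIN of the two aggregated result sets, matching on year and month.

        -- Sketch built from the two aggregate queries in the post.
        SELECT i.iYear AS yr, i.iMonth AS mnth, i.issed, r.redemed
        FROM (SELECT YEAR(issue_date) AS iYear, MONTH(issue_date) AS iMonth, SUM(rec_issued) AS issed
              FROM view_rec_issued WHERE regen_id = 2
              GROUP BY YEAR(issue_date), MONTH(issue_date)) AS i
        LEFT JOIN (SELECT YEAR(redemption_date) AS rYear, MONTH(redemption_date) AS rMonth, SUM(total_redem) AS redemed
                   FROM recredem_month_wise WHERE regen_id = 2
                   GROUP BY YEAR(redemption_date), MONTH(redemption_date)) AS r
               ON r.rYear = i.iYear AND r.rMonth = i.iMonth
        UNION
        SELECT r.rYear, r.rMonth, i.issed, r.redemed
        FROM (SELECT YEAR(issue_date) AS iYear, MONTH(issue_date) AS iMonth, SUM(rec_issued) AS issed
              FROM view_rec_issued WHERE regen_id = 2
              GROUP BY YEAR(issue_date), MONTH(issue_date)) AS i
        RIGHT JOIN (SELECT YEAR(redemption_date) AS rYear, MONTH(redemption_date) AS rMonth, SUM(total_redem) AS redemed
                    FROM recredem_month_wise WHERE regen_id = 2
                    GROUP BY YEAR(redemption_date), MONTH(redemption_date)) AS r
                ON r.rYear = i.iYear AND r.rMonth = i.iMonth
        ORDER BY yr, mnth;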


  • MySQL Multiple Table Join

    - by hitman001
    I have 3 tables that I'm trying to join and get distinct results.

        CREATE TABLE `car` (
          `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(255) NOT NULL DEFAULT '',
          PRIMARY KEY (`id`)
        ) ENGINE=InnoDB

        mysql> select * from car;
        +----+-------+
        | id | name  |
        +----+-------+
        |  1 | acura |
        +----+-------+

        CREATE TABLE `tires` (
          `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `tire_desc` varchar(255) DEFAULT NULL,
          `car_id` int(10) unsigned NOT NULL,
          PRIMARY KEY (`id`),
          KEY `new_fk_constraint` (`car_id`),
          CONSTRAINT `new_fk_constraint` FOREIGN KEY (`car_id`) REFERENCES `car` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
        ) ENGINE=InnoDB

        mysql> select * from tires;
        +----+-------------+--------+
        | id | tire_desc   | car_id |
        +----+-------------+--------+
        |  1 | front_right |      1 |
        |  2 | front_left  |      1 |
        +----+-------------+--------+

        CREATE TABLE `lights` (
          `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
          `lights_desc` varchar(255) NOT NULL,
          `car_id` int(10) unsigned NOT NULL,
          PRIMARY KEY (`id`),
          KEY `new1_fk_constraint` (`car_id`),
          CONSTRAINT `new1_fk_constraint` FOREIGN KEY (`car_id`) REFERENCES `car` (`id`) ON DELETE CASCADE ON UPDATE CASCADE
        ) ENGINE=InnoDB

        mysql> select * from lights;
        +----+-------------+--------+
        | id | lights_desc | car_id |
        +----+-------------+--------+
        |  1 | right_light |      1 |
        |  2 | left_light  |      1 |
        +----+-------------+--------+

    Here is my query:

        mysql> SELECT name, group_concat(tire_desc), group_concat(lights_desc)
               FROM car
               left join tires on car.id = tires.car_id
               left join lights on car.id = car_id
               group by car.id;

        +-------+-----------------------------------------------+-----------------------------------------------+
        | name  | group_concat(tire_desc)                       | group_concat(lights_desc)                     |
        +-------+-----------------------------------------------+-----------------------------------------------+
        | acura | front_right,front_right,front_left,front_left | right_light,left_light,right_light,left_light |
        +-------+-----------------------------------------------+-----------------------------------------------+

    I get duplicate entries, and this is what I would like to get:

        +-------+-------------------------+---------------------------+
        | name  | group_concat(tire_desc) | group_concat(lights_desc) |
        +-------+-------------------------+---------------------------+
        | acura | front_right,front_left  | right_light,left_light    |
        +-------+-------------------------+---------------------------+

    I cannot use distinct in group_concat because I might have legitimate duplicates which I would like to keep. Is there any way to do this query using joins and not using inner selects like the statement below?

        SELECT name,
               (select group_concat(tire_desc) from tires where car.id = tires.car_id),
               (select group_concat(lights_desc) from lights where car.id = lights.car_id)
        FROM car

    Also, if I do use inner selects, will there be any performance issues compared to joins?
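    One join-based way around the row multiplication, sketched here rather than taken from the post: aggregate each child table in a derived table first, so each car joins to at most one row per side.

        -- Sketch using the tables shown in the post.
        SELECT c.name, t.tires, l.lights
        FROM car AS c
        LEFT JOIN (SELECT car_id, GROUP_CONCAT(tire_desc) AS tires
                   FROM tires GROUP BY car_id) AS t ON t.car_id = c.id
        LEFT JOIN (SELECT car_id, GROUP_CONCAT(lights_desc) AS lights
                   FROM lights GROUP BY car_id) AS l ON l.car_id = c.id;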


  • How to efficiently use LOCK_ESCALATION mssql 2008

    - by Avias
    I'm currently having trouble with frequent deadlocks on a specific user table in MS SQL 2008. Here are some facts about this particular table:

    - It has a large number of rows (1 to 2 million).
    - All the indexes used on this table only have "use row lock" ticked in their options.
    - Rows are frequently updated by multiple transactions but are unique (e.g. probably a thousand or more update statements are executed against different unique rows every hour).
    - The table does not use partitions.

    Upon checking the table in sys.tables, I found that lock_escalation is set to TABLE. I'm very tempted to set lock_escalation for this table to DISABLE, but I'm not really sure what side effects this would incur. From what I understand, using DISABLE will minimize escalating locks to TABLE level, which, combined with the row lock settings of the indexes, should theoretically minimize the deadlocks I am encountering.

    From what I have read in "Determining threshold for lock escalation", it seems that locking automatically escalates when a single transaction fetches 5000 rows. What does a single transaction mean in this sense? A single session/connection getting 5000 rows through individual update/select statements? Or a single SQL update/select statement that fetches 5000 or more rows? Any insight is appreciated. BTW, n00b DBA here. Thanks
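    For reference, and not taken from the post, the escalation setting is changed per table; a sketch with a placeholder table name:

        -- Sketch: dbo.MyBusyTable is a placeholder name.
        -- DISABLE prevents escalation to the table level in most cases;
        -- AUTO escalates to the partition level on partitioned tables.
        ALTER TABLE dbo.MyBusyTable SET (LOCK_ESCALATION = DISABLE);

        -- The current setting can be checked in sys.tables:
        SELECT name, lock_escalation_desc
        FROM sys.tables
        WHERE name = 'MyBusyTable';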


  • How to emulate a BEFORE DELETE trigger in SQL Server 2005

    - by Mark
    Let's say I have three tables, [ONE], [ONE_TWO], and [TWO]. [ONE_TWO] is a many-to-many join table with only [ONE_ID] and [TWO_ID] columns. There are foreign keys set up to link [ONE] to [ONE_TWO] and [TWO] to [ONE_TWO]. The FKs use the ON DELETE CASCADE option so that if either a [ONE] or [TWO] record is deleted, the associated [ONE_TWO] records will be automatically deleted as well.

    I want to have a trigger on the [TWO] table such that when a [TWO] record is deleted, it executes a stored procedure that takes a [ONE_ID] as a parameter, passing the [ONE_ID] values that were linked to the [TWO_ID] before the delete occurred:

        DECLARE @Statement NVARCHAR(max)
        SET @Statement = ''
        SELECT @Statement = @Statement + N'EXEC [MyProc] ''' + CAST([one_two].[one_id] AS VARCHAR(36)) + '''; '
        FROM deleted
        JOIN [one_two] ON deleted.[two_id] = [one_two].[two_id]
        EXEC (@Statement)

    Clearly, I need a BEFORE DELETE trigger, but there is no such thing in SQL Server 2005. I can't use an INSTEAD OF trigger because of the cascading FK. I get the impression that if I use a FOR DELETE trigger, when I join [deleted] to [ONE_TWO] to find the list of [ONE_ID] values, the FK cascade will have already deleted the associated [ONE_TWO] records, so I will never find any [ONE_ID] values. Is this true? If so, how can I achieve my objective? I'm thinking that I'd need to change the FK joining [TWO] to [ONE_TWO] to not use cascades and to do the delete from [ONE_TWO] manually in the trigger just before I manually delete the [TWO] records. But I'd rather not go through all that if there is a simpler way.
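    A sketch of the manual-cascade variant the post describes, assuming the FK from [ONE_TWO] to [TWO] is changed to a non-cascading one so that an INSTEAD OF trigger is permitted; this is an illustration, not a tested implementation.

        -- Sketch only: assumes the cascading FK on [ONE_TWO].[TWO_ID] has been replaced
        -- with a plain FK, so the join rows still exist when the trigger fires.
        CREATE TRIGGER trg_TWO_InsteadOfDelete ON [TWO]
        INSTEAD OF DELETE
        AS
        BEGIN
            SET NOCOUNT ON;

            -- Capture the ONE_IDs that are about to lose their link.
            DECLARE @Statement NVARCHAR(MAX);
            SET @Statement = N'';
            SELECT @Statement = @Statement + N'EXEC [MyProc] ''' + CAST(ot.[ONE_ID] AS VARCHAR(36)) + N'''; '
            FROM deleted AS d
            JOIN [ONE_TWO] AS ot ON ot.[TWO_ID] = d.[TWO_ID];

            -- Do the cascade by hand, then the actual delete.
            DELETE ot FROM [ONE_TWO] AS ot JOIN deleted AS d ON ot.[TWO_ID] = d.[TWO_ID];
            DELETE t  FROM [TWO]     AS t  JOIN deleted AS d ON t.[TWO_ID]  = d.[TWO_ID];

            EXEC (@Statement);
        END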


  • Prototype or jQuery for DOM manipulation (client-side dynamic content)

    - by luiggitama
    I need to know which of these two JavaScript frameworks is better for client-side dynamic content modification for known DOM elements (by id), in terms of performance, memory usage, etc.:

    - Prototype's $('id').update(content)
    - jQuery's jQuery('#id').html(content)

    BTW, both libraries coexist with no conflict in my app, because I'm using RichFaces for JSF development; that's why I can use "jQuery" instead of "$". I have at least 20 updatable areas in my page, and for each one I prepare content (tables, option lists, etc.), based on some user-defined client-side criteria filtering or some AJAX event, etc., like this:

        var html = [];
        var idx = 0;
        ...
        html[idx++] = '<tr><td class="cell"><span class="link" title="View" onclick="myFunction(';
        html[idx++] = param;
        html[idx++] = ')"></span>';
        html[idx++] = someText;
        html[idx++] = '</td></tr>';
        ...

    So here comes the question, which is better to use:

        // Prototype's
        $('myId').update(html.join(''));
        // or jQuery's
        jQuery('#myId').html(html.join(''));

    Other needed functions are hide() and show(), which are present in both frameworks. Which is better? Also I need to enable/disable form controls, and to read/set their values. Note that I know my updatable areas' ids (I don't need CSS selectors at this point). And I should add that I'm saving these queried objects in a data structure for later use, so they are requested just once when the page is rendered, like this:

        MyData = {div1:jQuery('#id1'), div2:$('id2'), ...};
        ...
        div1.update('content 1');
        div2.html('content 2');

    So, which is the best practice?


  • MYSQL how to sum rows with same key, then delete the duplicate rows

    - by Bhante-S
    What I have:

        key  data
        1     22
        1      5
        2      6
        3      1
        3     -3

    What I want:

        key  data
        1     27
        2      6
        3     -2

    I don't mind doing this with two or more queries, esp. if they are simple--makes for easier maintenance. Also the tables are fairly small (<2,000 records). The 'key' field is indexed and allows duplicates. Muchas Gracias
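    A two-step sketch, not from the post, that matches the "two or more simple queries" preference: build the summed rows in a scratch table, then replace the originals. The table and column names are placeholders, since the post does not give the real table name.

        -- Sketch: mytable, `key` and data are assumed names; back the table up first.
        CREATE TEMPORARY TABLE summed AS
            SELECT `key`, SUM(data) AS data FROM mytable GROUP BY `key`;

        DELETE FROM mytable;

        INSERT INTO mytable (`key`, data)
            SELECT `key`, data FROM summed;

        DROP TEMPORARY TABLE summed;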


  • Designing a database for a user/points system? (in Django)

    - by AP257
    First of all, sorry if this isn't an appropriate question for StackOverflow. I've tried to make it as generalisable as possible. I want to create a database (MySQL, site running Django) that has users, who can be allocated a certain number of points for various types of action - it's a collaborative game. My requirements are to obtain:

    - the number of points a user has
    - the user's ranking compared to all other users
    - and the overall leaderboard (i.e. all users ranked in order of points)

    This is what I have so far, in my Django models.py file:

        class SiteUser(models.Model):
            name = models.CharField(max_length=250)
            email = models.EmailField(max_length=250)
            date_added = models.DateTimeField(auto_now_add=True)

            def points_total(self):
                points_added = PointsAdded.objects.filter(user=self)
                points_total = 0
                for point in points_added:
                    points_total += point.points
                return points_total

        class PointsAdded(models.Model):
            user = models.ForeignKey('SiteUser')
            action = models.ForeignKey('Action')
            date_added = models.DateTimeField(auto_now_add=True)

            def points(self):
                points = Action.objects.filter(action=self.action)
                return points

        class Action(models.Model):
            points = models.IntegerField()
            action = models.CharField(max_length=36)

    However it's rapidly becoming clear to me that it's actually quite complex (in Django query terms at least) to figure out the user's ranking and return the leaderboard of users. At least, I'm finding it tough. Is there a more elegant way to do something like this? This question seems to suggest that I shouldn't even have a separate points table - what do people think? It feels more robust to have separate tables, but I don't have much experience of database design.
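    Whatever the ORM, the leaderboard the post describes boils down to a single aggregate; a sketch in raw SQL, where the table names assume Django's default app_model naming with a hypothetical app label of "game", which is an assumption and not from the post.

        -- Sketch: game_siteuser, game_pointsadded and game_action are assumed names.
        SELECT u.id, u.name, COALESCE(SUM(a.points), 0) AS total_points
        FROM game_siteuser AS u
        LEFT JOIN game_pointsadded AS pa ON pa.user_id = u.id
        LEFT JOIN game_action      AS a  ON a.id = pa.action_id
        GROUP BY u.id, u.name
        ORDER BY total_points DESC;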


  • please suggest mysql query for this

    - by I Like PHP
    I have two tables, shown below.

    table_joining:

        id  join_id(PK)  transfer_id(FK)  unit_id  transfer_date  joining_date
        1   j_1          t_1              u_1      2010-06-05     2010-03-05
        2   j_2          t_2              u_3      2010-05-10     2010-03-10
        3   j_3          t_3              u_6      2010-04-10     2010-01-01
        4   j_5          NULL             u_3      NULL           2010-06-05
        5   j_6          NULL             u_4      NULL           2010-05-05

    table_transfer:

        id  transfer_id(PK)  pastUnitId  futureUnitId  effective_transfer_date
        1   t_1              u_3         u_1           2010-06-05
        2   t_2              u_6         u_1           2010-05-10
        3   t_3              u_5         u_3           2010-04-10

    Now I want the details of every employee (by join_id) currently working in unit u_3. That means I want only:

    - j_1 (has been transferred, but the effective_transfer_date is a future date, so right now still in u_3)
    - j_2 (transferred, and right now in u_3 because the effective_transfer_date has already passed)
    - j_6 (right now in u_3 and never transferred)

    As far as I know, these are the steps I need to take care of:

    1. First check in table_joining whether transfer_id is NULL or not.
    2. If transfer_id is NULL, then look for unit_id = 'u_3' where joining_date <= CURDATE() (meaning that person has already joined u_3).
    3. If transfer_id is NOT NULL, then go to table_transfer using transfer_id (foreign key reference).
    4. Check the effective_transfer_date for that transfer_id, i.e. whether effective_transfer_date <= CURDATE().
    5. If the transfer date has passed (the transfer has been done), return futureUnitId, otherwise return pastUnitId.

    I used two separate queries but don't know how to join them.

    For steps 1 and 2:

        SELECT unit_id
        FROM table_joining
        WHERE joining_date <= CURDATE()
          AND transfer_id IS NULL
          AND unit_id = 'u_3'

    For step 5:

        SELECT IF(effective_transfer_date <= CURDATE(), futureUnitId, pastUnitId) AS currentUnitID
        FROM table_transfer
        -- here, how do we select only those rows which have currentUnitID = 'u_3'?

    Please guide me through the process. I'm just confused with JOINs. I think using a LEFT JOIN can return the data I need, or maybe using a subquery value in the main query, but I'm not getting how to implement it. Thanks for always helping me.
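    A single-query sketch of steps 1-5 as written, not from the post; it is worth checking against the sample data, which does not match the described steps in every row.

        -- Sketch using the tables shown in the post.
        SELECT j.join_id, j.unit_id,
               IF(t.transfer_id IS NULL,
                  j.unit_id,
                  IF(t.effective_transfer_date <= CURDATE(), t.futureUnitId, t.pastUnitId)) AS currentUnitID
        FROM table_joining AS j
        LEFT JOIN table_transfer AS t ON t.transfer_id = j.transfer_id
        WHERE j.joining_date <= CURDATE()
        HAVING currentUnitID = 'u_3';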


  • Hibernate OneToMany and ManyToOne confusion! Null List!

    - by squizz
    I have two tables... For example - Company and Employee (let's keep this real simple):

        Company( id, name );
        Employee( id, company_id );

    Employee.company_id is a foreign key. My entity model looks like this...

    Employee:

        @ManyToOne(cascade = CascadeType.PERSIST)
        @JoinColumn(name = "company_id")
        Company company;

    Company:

        @OneToMany(cascade = CascadeType.ALL, fetch = FetchType.EAGER)
        @JoinColumn(name = "company_id")
        List<Employee> employeeList = new ArrayList<Employee>();

    So, yeah, I want a list of employees for a company. When I do the following...

        Employee e = new Employee();
        e.setCompany(c); //c is a Company that is already in the database.
        DAO.insertEmployee(e); //this works fine!

    If I then get my Company object, its list is empty! I've tried endless different ways from the Hibernate documentation! Obviously not tried the correct one yet! I just want the list to be populated for me, or to find a sensible alternative. Help would be greatly appreciated, thanks!


  • Transfering data from Excel to dataGridView

    - by Panecillo
    I have a problem when I want to transfer data from Excel to a dataGridView in C#. My Excel column has numeric and alphanumeric values. But, for example, if the column has 3 numbers and 2 alphanumeric values then only the numbers are shown in the dataGridView, and vice versa. Why aren't all the values shown? This is what happens:

        Excel's Column    DataGridView's Column
        45654             45654
        P745K             31233
        31233             23111
        23111
        45X2Y

    Here is my code to load the dataGridView:

        string connectionString = @"Provider=Microsoft.Jet.OLEDB.4.0;Data Source=D:\test.xls;Extended Properties=""Excel 8.0;HDR=YES;""";
        DbProviderFactory factory = DbProviderFactories.GetFactory("System.Data.OleDb");
        DbDataAdapter adapter = factory.CreateDataAdapter();
        DbCommand selectCommand = factory.CreateCommand();
        selectCommand.CommandText = "SELECT * FROM [sheet1$]";
        DbConnection connection = factory.CreateConnection();
        connection.ConnectionString = connectionString;
        selectCommand.Connection = connection;
        adapter.SelectCommand = selectCommand;
        data = new DataSet();
        adapter.Fill(data);
        dataGridView1.DataSource = data.Tables[0].DefaultView;

    I hope I explained it well. Sorry for my bad English. Thanks.


  • Separating code logic from the actual data structures. Best practices?

    - by Patrick
    I have an application that loads lots of data into memory (this is because it needs to perform some mathematical simulation on big data sets). This data comes from several database tables that all refer to each other. The consistency rules on the data are rather complex, and looking up all the relevant data requires quite a few hashes and other additional data structures built on the data.

    The problem is that this data may also be changed interactively by the user in a dialog. When the user presses the OK button, I want to perform all the checks to see that he didn't introduce inconsistencies in the data. In practice all the data needs to be checked at once, so I cannot update my data set incrementally and perform the checks one by one. However, all the checking code works on the actual data set loaded in memory, and uses the hashing and other data structures. This means I have to do the following:

    - Take the user's changes from the dialog
    - Apply them to the big data set
    - Perform the checks on the big data set
    - Undo all the changes if the checks fail

    I don't like this solution since other threads are also continuously using the data set, and I don't want to halt them while performing the checks. Also, the undo means that the old situation needs to be put aside, which is also not possible. An alternative is to separate the checking code from the data set (and let it work on explicitly given data, e.g. coming from the dialog), but this means that the checking code cannot use hashing and other additional data structures, because they only work on the big data set, making the checks much slower.

    What is a good practice for checking a user's changes on complex data before applying them to the application's data set?


  • SQL Server 2005 FREETEXT() Perfomance Issue

    - by Zenon
    I have a query with about 6-7 joined tables and a FREETEXT() predicate on 6 columns of the base table in the WHERE clause. Now, this query worked fine (in under 2 seconds) for the last year and practically remained unchanged (I tried old versions and the problem persists).

    So today, all of a sudden, the same query takes around 1-1.5 minutes. After checking the execution plan in SQL Server 2005, rebuilding the FULLTEXT index of that table, reorganising the FULLTEXT index, creating the index from scratch, restarting the SQL Server service and restarting the whole server, I don't know what else to try. I temporarily switched the query to use LIKE instead until I figure this out (which takes about 6 seconds now).

    When I look at the query in the query performance analyser and compare the FREETEXT query with the LIKE query, the former has 350 times as many reads (4921261 vs. 13943) and 20 times (38937 vs. 1938) the CPU usage of the latter. So it really is the FREETEXT predicate that causes it to be so slow. Has anyone got any ideas on what the reason might be? Or further tests I could do?
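    One diagnostic worth running, suggested here and not taken from the post: check whether the full-text catalog is in the middle of a population or has a backlog of pending changes, since a crawl in progress can slow FREETEXT dramatically. A sketch with placeholder names:

        -- Sketch: 'MyCatalog' and dbo.MyTable are placeholder names.
        SELECT FULLTEXTCATALOGPROPERTY('MyCatalog', 'PopulateStatus') AS populate_status, -- 0 = idle
               FULLTEXTCATALOGPROPERTY('MyCatalog', 'ItemCount')      AS item_count;

        -- Pending change-tracking work for the indexed table:
        SELECT OBJECTPROPERTYEX(OBJECT_ID('dbo.MyTable'), 'TableFullTextPendingChanges') AS pending_changes;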


  • Zend Metadata Cache in file

    - by Matthieu
    I set up a metadata cache in Zend Framework because a lot of DESCRIBE queries were being executed and it affected performance.

        $frontendOptions = array('automatic_serialization' => true);
        $backendOptions = array('cache_dir' => CACHE_PATH . '/db-tables-metadata');
        $cache = Zend_Cache::factory(
            'Core',
            'File',
            $frontendOptions,
            $backendOptions
        );
        Zend_Db_Table::setDefaultMetadataCache($cache);

    I can indeed see the cache files created, and the website works great. However, when I launch unit tests, or a script of the same application that performs DB queries, I end up with an error because Zend couldn't read the cache files. This is because on the website the cache files are created by the www user, and when I run phpunit or a script it tries to read them with my user and fails. Do you see any solution to that? I have some quick-fix ideas but I'm looking for a good/stable solution. And I'd rather avoid running phpunit or the scripts as www if possible (for practical reasons).


  • Programming Practice

    - by deepti
        public DataTable UserUpdateTempSettings(int install_id, int install_map_id, string Setting_value, string LogFile)
        {
            SqlConnection oConnection = new SqlConnection(sConnectionString);
            DataSet oDataset = new DataSet();
            DataTable oDatatable = new DataTable();
            SqlDataAdapter MyDataAdapter = new SqlDataAdapter();
            try
            {
                oConnection.Open();
                cmd = new SqlCommand("SP_HOTDOC_PRINTTEMPLATE_PERMISSION", oConnection);
                cmd.Parameters.Add(new SqlParameter("@INSTALL_ID", install_id));
                cmd.Parameters.Add(new SqlParameter("@INSTALL_MAP_ID", install_map_id));
                cmd.Parameters.Add(new SqlParameter("@SETTING_VALUE", Setting_value));
                if (LogFile != "")
                {
                    cmd.Parameters.Add(new SqlParameter("@LOGFILE", LogFile));
                }
                cmd.CommandType = CommandType.StoredProcedure;
                MyDataAdapter.SelectCommand = cmd;
                cmd.ExecuteNonQuery();
                MyDataAdapter.Fill(oDataset);
                oDatatable = oDataset.Tables[0];
                return oDatatable;
            }
            catch (Exception ex)
            {
                Utils.ShowError(ex.Message);
                return oDatatable;
            }
            finally
            {
                if ((oConnection.State != ConnectionState.Closed) || (oConnection.State != ConnectionState.Broken))
                {
                    oConnection.Close();
                }
                oDataset = null;
                oDatatable = null;
                oConnection.Dispose();
                oConnection = null;
            }
        }

    I have used ExecuteNonQuery here. Normally it is not used together with a data adapter, but if I don't call it I get an error. Is it bad programming practice to use ExecuteNonQuery with a data adapter?


  • Automatically Persisting a Complex Java Object

    - by VeeArr
    For a project I am working on, I need to persist a number of POJOs to a database. The POJO class definitions are sometimes highly nested, but they should flatten okay, as the nesting is tree-like and contains no cycles (and the base elements are eventually primitives/Strings). It is preferred that the solution create one table per data type and that the tables have one field per primitive member in the POJO. Subclassing and similar problems are not issues for this particular project. Does anybody know of any existing solutions that can:

    - Automatically generate a CREATE TABLE definition from the class definition
    - Automatically generate a query to persist an object to the database, given an instance of the object
    - Automatically generate a query to retrieve an object from the database and return it as a POJO, given a key

    Solutions that can do this with minimum modifications/annotations to the class files and minimum external configuration are preferred.

    Example Java classes:

        //Class to be persisted
        class TypeA {
            String guid;
            long timestamp;
            TypeB data1;
            TypeC data2;
        }

        class TypeB {
            int id;
            int someData;
        }

        class TypeC {
            int id;
            int otherData;
        }

    Could map to:

        CREATE TABLE TypeA (
            guid CHAR(255),
            timestamp BIGINT,
            data1_id INT,
            data1_someData INT,
            data2_id INT,
            data2_otherData INT
        );

    Or something similar.


  • EF Query with conditional include that uses Joins

    - by makerofthings7
    This is a follow up to another user's question. I have 5 tables:

    - CompanyDetail
    - CompanyContacts (FK to CompanyDetail)
    - CompanyContactsSecurity (FK to CompanyContact)
    - UserDetail
    - UserGroupMembership (FK to UserDetail)

    How do I return all companies and include the contacts in the same query? I would like to include companies that contain zero contacts. Companies have a 1 to many association to Contacts, however not every user is permitted to see every Contact. My goal is to get a list of every Company regardless of the count of Contacts, but include contact data. Right now I have this working query:

        var userGroupsQueryable = _entities.UserGroupMembership
            .Where(ug => ug.UserID == UserID)
            .Select(a => a.GroupMembership);
        var contactsGroupsQueryable = _entities.CompanyContactsSecurity;//.Where(c => c.CompanyID == companyID);

        /// OLD Query that shows permitted contacts
        /// ... I want to "use this query inside "listOfCompany"
        ///
        //var permittedContacts = from c in userGroupsQueryable
        //join p in contactsGroupsQueryable on c equals p.GroupID
        //select p;

    However this is inefficient when I need to get all contacts for all companies, since I use a For..Each loop, query each company individually, and update my viewmodel.

    Question: How do I shoehorn the permittedContacts variable above into this query:

        var listOfCompany = from company in _entities.CompanyDetail.Include("CompanyContacts").Include("CompanyContactsSecurity")
                            where company.CompanyContacts.Any(
                                // Insert Query here....
                                // b => b.CompanyContactsSecurity.Join(/*inner*/,/*OuterKey*/,/*innerKey*/,/*ResultSelector*/)
                            )
                            select company;

    My attempt at doing this resulted in:

        var listOfCompany = from company in _entities.CompanyDetail.Include("CompanyContacts").Include("CompanyContactsSecurity")
                            where company.CompanyContacts.Any(
                                // This is concept only... doesn't work...
                                from grps in userGroupsQueryable
                                join p in company.CompanyContactsSecurity on grps equals p.GroupID
                                select p
                            )
                            select company;


  • Rails active record association problem

    - by Harm de Wit
    Hello, I'm new to active record association in Rails, so I don't know how to solve the following problem. I have tables called 'meetings' and 'users'. I have correctly associated these two together by making a table 'participants' and setting up the following association statements:

        class Meeting < ActiveRecord::Base
          has_many :participants, :dependent => :destroy
          has_many :users, :through => :participants
        end

    and

        class Participant < ActiveRecord::Base
          belongs_to :meeting
          belongs_to :user
        end

    and the last model:

        class User < ActiveRecord::Base
          has_many :participants, :dependent => :destroy
        end

    At this point all is going well and I can access the user values of attending participants of a specific meeting by calling @meeting.users in the normal meeting show.html.erb view. Now I want to make connections between these participants. Therefore I made a model called 'connections' and created the columns 'meeting_id', 'user_id' and 'connected_user_id'. So these connections are kind of like friendships within a certain meeting. My question is: how can I set up the model associations so I can easily control these connections? I would like a solution where I could use:

        @meeting.users.each do |user|
          user.connections.each do |c|
            <do something>
          end
        end

    I tried this by changing the Meeting model to this:

        class Meeting < ActiveRecord::Base
          has_many :participants, :dependent => :destroy
          has_many :users, :through => :participants
          has_many :connections, :dependent => :destroy
          has_many :participating_user_connections, :through => :connections, :source => :user
        end

    Please, does anyone have a solution/tip on how to solve this the Rails way?

