Search Results

Search found 8603 results on 345 pages for 'altering tables'.


  • Parsing/Tokenizing a String Containing a SQL Command

    - by Alan Storm
    Are there any open source libraries (any language, python/PHP preferred) that will tokenize/parse an ANSI SQL string into its various components? That is, if I had the following string SELECT a.foo, b.baz, a.bar FROM TABLE_A a LEFT JOIN TABLE_B b ON a.id = b.id WHERE baz = 'snafu'; I'd get back a data structure/object something like //fake PHPish $results['select-columns'] = Array[a.foo,b.baz,a.bar]; $results['tables'] = Array[TABLE_A,TABLE_B]; $results['table-aliases'] = Array[a=TABLE_A, b=TABLE_B]; //etc... Restated, I'm looking for the code in a database package that teases the SQL command apart so that the engine knows what to do with it. Searching the internet turns up a lot of results on how to parse a string WITH SQL. That's not what I want. I realize I could glop through an open source database's code to find what I want, but I was hoping for something a little more ready made, (although if you know where in the MySQL, PostgreSQL, SQLite source to look, feel free to pass it along) Thanks!


  • Building a many-to-many db schema using only an unpredictable number of foreign keys

    - by user1449855
    Good afternoon (at least around here), I have a many-to-many relationship schema that I'm having trouble building. The main problem is that I'm only working with primary and foreign keys (no varchars or enums to simplify things) and the number of many-to-many relationships is not predictable and can increase at any time. I looked around at various questions and couldn't find something that directly addressed this issue. I split the problem in half, so I now have two one-to-many schemas. One is solved but the other is giving me fits. Let's assume table FOO is a standard, boring table that has a simple primary key. It's the one in the one-to-many relationship. Table BAR can relate to multiple keys of FOO. The number of related keys is not known beforehand. An example: From a query FOO returns ids 3, 4, 5. BAR needs a unique key that relates to 3, 4, 5 (though there could be any number of ids returned) The usual join table does not work: Table FOO_BAR primary_key | foo_id | bar_id | Since FOO returns 3 unique keys and here bar_id has a one-to-one relationship with foo_id. Having two join tables does not seem to work either, as it still can't map foo_ids 3, 4, 5 to a single bar_id. Table FOO_TO_BAR primary_key | foo_id | bar_to_foo_id | Table BAR_TO_FOO primary_key | foo_to_bar_id | bar_id | What am I doing wrong? Am I making things more complicated than they are? How should I approach the problem? Thanks a lot for the help.
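
    One way to read this: the junction table needs one row per (foo_id, bar_id) pair rather than one row per bar_id, so a single BAR key can legitimately relate to any number of FOO keys. A minimal SQL sketch (column names beyond the two ids are assumptions, not from the question):

        CREATE TABLE FOO_BAR (
          foo_id INT NOT NULL,
          bar_id INT NOT NULL,
          PRIMARY KEY (foo_id, bar_id),               -- one row per pair; no extra surrogate key required
          FOREIGN KEY (foo_id) REFERENCES FOO (id),
          FOREIGN KEY (bar_id) REFERENCES BAR (id)
        );

        -- BAR row 7 related to FOO rows 3, 4 and 5 (illustrative ids)
        INSERT INTO FOO_BAR (foo_id, bar_id) VALUES (3, 7), (4, 7), (5, 7);

        -- all FOO keys related to a given BAR key
        SELECT foo_id FROM FOO_BAR WHERE bar_id = 7;

    The same table also answers the reverse question (all BAR keys for a given FOO key), which is why a second join table should not be needed.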


  • Java Swing Table questions

    - by Dalton Conley
    Hey guys, working on an event calendar. I'm having some trouble getting my column headers to display... here is the code: private JTable calendarTable; private DefaultTableModel calendarTableModel; final private String [] days = {"Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"}; ////////////////////////////////////////////////////////////////////// /* Setup the actual calendar table */ calendarTableModel = new DefaultTableModel() { public boolean isCellEditable(int row, int col){ return false; } }; // setup columns for(int i = 0; i < 7; i++) calendarTableModel.addColumn(days[i]); calendarTable = new JTable(calendarTableModel); calendarTable.getTableHeader().setResizingAllowed(false); calendarTable.getTableHeader().setReorderingAllowed(false); calendarTable.setColumnSelectionAllowed(true); calendarTable.setRowSelectionAllowed(true); calendarTable.setSelectionMode(ListSelectionModel.SINGLE_SELECTION); calendarTable.setRowHeight(105); calendarTableModel.setColumnCount(7); calendarTableModel.setRowCount(6); Also, I'm sort of new to tables... how can I make the row height split evenly across the maximum size of the table?


  • find(:all) and then add data from another table to the object

    - by Koning Baard XIV
    I have two tables: create_table "friendships", :force => true do |t| t.integer "user1_id" t.integer "user2_id" t.boolean "hasaccepted" t.datetime "created_at" t.datetime "updated_at" end and create_table "users", :force => true do |t| t.string "email" t.string "password" t.string "phone" t.boolean "gender" t.datetime "created_at" t.datetime "updated_at" t.string "firstname" t.string "lastname" t.date "birthday" end I need to show the user a list of friend requests, so I use this method in my controller: def getfriendrequests respond_to do |format| case params[:id] when "to_me" @friendrequests = Friendship.find(:all, :conditions => { :user2_id => session[:user], :hasaccepted => false }) when "from_me" @friendrequests = Friendship.find(:all, :conditions => { :user1_id => session[:user], :hasaccepted => false }) end format.xml { render :xml => @friendrequests } format.json { render :json => @friendrequests } end end I do nearly everything using AJAX, so to fetch the first and last name of the user with UID user2_id (the to_me param comes later, don't worry right now), I need a for loop which makes multiple AJAX calls. This sucks and costs a lot of bandwidth. So I'd rather have getfriendrequests also return the first and last name of the corresponding users, so that, e.g., the JSON response would not be: [ { "friendship": { "created_at": "2010-02-19T13:51:31Z", "user1_id": 2, "updated_at": "2010-02-19T13:51:31Z", "hasaccepted": false, "id": 11, "user2_id": 3 } }, { "friendship": { "created_at": "2010-02-19T16:31:23Z", "user1_id": 2, "updated_at": "2010-02-19T16:31:23Z", "hasaccepted": false, "id": 12, "user2_id": 4 } } ] but rather: [ { "friendship": { "created_at": "2010-02-19T13:51:31Z", "user1_id": 2, "updated_at": "2010-02-19T13:51:31Z", "hasaccepted": false, "id": 11, "user2_id": 3, "firstname": "Jon", "lastname": "Skeet" } }, { "friendship": { "created_at": "2010-02-19T16:31:23Z", "user1_id": 2, "updated_at": "2010-02-19T16:31:23Z", "hasaccepted": false, "id": 12, "user2_id": 4, "firstname": "Mark", "lastname": "Gravell" } } ] I thought of a for loop in the getfriendrequests method, but I don't know how to implement this, and maybe there is an easier way. It must also work for XML. Can anyone help me? Thanks
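
    For what it's worth, the data being asked for corresponds to a single join; a sketch of the underlying SQL (the Rails-side equivalent, e.g. :joins/:select options on find, is left as an assumption):

        -- requests sent by the session user, together with the recipient's name
        SELECT f.id, f.user1_id, f.user2_id, f.hasaccepted, f.created_at, f.updated_at,
               u.firstname, u.lastname
        FROM friendships AS f
        JOIN users AS u ON u.id = f.user2_id
        WHERE f.user1_id = 2              -- illustrative session user id
          AND f.hasaccepted = 0;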


  • Convert query with system objects from SQL 2000 to 2005/2008

    - by Dan
    I have some SQL I need to get working on SQL 2005/2008. The SQL is from SQL 2000 and uses some system objects to make it work. master.dbo.spt_provider_types master.dbo.syscharsets systypes syscolumns sysobjects I know SQL 2005 no longer uses system tables and I can get the same information from views, but I am looking for a solution that will work for both SQL 2000 and 2005/2008. Any ideas? select top 100 percent TABLE_CATALOG = db_name(), TABLE_SCHEMA = user_name(o.uid), TABLE_NAME = o.name, COLUMN_NAME = c.name, ORDINAL_POSITION = convert(int, ( select count(*) from syscolumns sc where sc.id = c.id AND sc.number = c.number AND sc.colid <= c.colid )), IS_COMPUTED = convert(bit, c.iscomputed) from syscolumns c left join syscomments m on c.cdefault = m.id and m.colid = 1, sysobjects o, master.dbo.spt_provider_types d, systypes t, master.dbo.syscharsets a_cha /* charset/1001, not sortorder.*/ where o.name = @table_name and permissions(o.id, c.name) <> 0 and (o.type in ('U','V','S') OR (o.type in ('TF', 'IF') and c.number = 0)) and o.id = c.id and t.xtype = d.ss_dtype and c.length = case when d.fixlen > 0 then d.fixlen else c.length end and c.xusertype = t.xusertype and a_cha.type = 1001 /* type is charset */ and a_cha.id = isnull(convert(tinyint, CollationPropertyFromID(c.collationid, 'sqlcharset')), convert(tinyint, ServerProperty('sqlcharset'))) -- make sure there's one and only one row selected for each column order by 2, 3, c.colorder ) tbl where IS_COMPUTED = 0
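
    One hedged option for the portable parts: the INFORMATION_SCHEMA views exist in both SQL Server 2000 and 2005/2008, so the basic column listing can come from there (the provider-type and charset joins from the original query would still need separate handling). A sketch:

        SELECT TABLE_CATALOG,
               TABLE_SCHEMA,
               TABLE_NAME,
               COLUMN_NAME,
               ORDINAL_POSITION,
               COLUMNPROPERTY(OBJECT_ID(TABLE_SCHEMA + '.' + TABLE_NAME),
                              COLUMN_NAME, 'IsComputed') AS IS_COMPUTED
        FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_NAME = @table_name
        ORDER BY TABLE_SCHEMA, TABLE_NAME, ORDINAL_POSITION;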


  • Way to automate setting of MergeOptions

    - by Nix
    I am looking for an automated way to iterate over all ObjectQueries and set the merge option to no tracking (read-only context). Once I find out how to do it I will be able to generate a default read-only context using a T4 template. Is this possible? For example, let's say I have these tables in my object context SampleContext TableA TableB TableC I would have to go through and do the below. SampleContext sc = new SampleContext(); sc.TableA.MergeOption = MergeOption.NoTracking; sc.TableB.MergeOption = MergeOption.NoTracking; sc.TableC.MergeOption = MergeOption.NoTracking; I am trying to find a way to generalize this using the object context. I want to get it down to something like foreach(var objectQuery : sc){ objectQuery.MergeOption = MergeOption.NoTracking; } Preferably I would like to do it using the base class (ObjectContext): ObjectContext baseClass = sc as ObjectContext var objectQueries = sc.MetadataWorkspace.GetItem("Magic Object Query Option); But I am not sure I can even get access to the queries. Any help would be appreciated.


  • Optimizing MySQL update query

    - by Jernej Jerin
    This is currently my MySQL UPDATE query, which is called from a program written in Java: String query = "UPDATE maxday SET DatePressureREL = (SELECT Date FROM ws3600 WHERE PressureREL = (SELECT MAX" + "(PressureREL) FROM ws3600 WHERE Date >= '" + Date + "') AND Date >= '" + Date + "' ORDER BY Date DESC LIMIT 1), " + "PressureREL = (SELECT PressureREL FROM ws3600 WHERE PressureREL = (SELECT MAX(PressureREL) FROM ws3600 " + "WHERE Date >= '" + Date + "') AND Date >= '" + Date + "' ORDER BY Date DESC LIMIT 1), ..."; try { s.execute(query); } catch (SQLException e) { System.out.println("SQL error"); } catch(Exception e) { e.printStackTrace(); } Let me first explain what it does. I have two tables; the first is ws3600, which holds columns (Date, PressureREL, TemperatureOUT, Dewpoint, ...). Then I have a second table, called maxday, which holds columns like DatePressureREL, PressureREL, DateTemperatureOUT, TemperatureOUT,... Now as you can see from the example, I update each column; the question is, is there a faster way? I am asking this because I am calling MAX twice, first to find the Date for that value and secondly to find the actual value. Now I know that I could write something like this: SELECT Date, PressureREL FROM ws3600 WHERE PressureREL = (SELECT MAX(PressureREL) FROM ws3600 WHERE Date >= '" + Date + "') AND Date >= '" + Date + "' ORDER BY Date DESC LIMIT 1 That way I get the Date of the max and the max value at the same time, and then update the data in the maxday table with those values. But the problem with this solution is that I have to execute many queries, which as I understand takes a lot more time compared to executing one long MySQL query, because of the overhead in sending each query to the server. If there is no better way, which of these two solutions should I choose? The first, which only takes one query but is very unoptimized, or the second, which is better in terms of optimization but needs a lot more queries, which probably means that the performance gain is lost because of the overhead in sending each query to the server?
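
    One possible single-statement shape, sketched with a literal date in place of the concatenated Java variable (a PreparedStatement with a placeholder would avoid the string concatenation entirely): find the max-pressure row once in a derived table, then copy both of its columns into maxday.

        UPDATE maxday
        CROSS JOIN (
            SELECT Date, PressureREL
            FROM ws3600
            WHERE Date >= '2010-05-01'               -- illustrative value of the Java "Date" variable
            ORDER BY PressureREL DESC, Date DESC     -- max pressure, latest date on ties
            LIMIT 1
        ) AS t
        SET maxday.DatePressureREL = t.Date,
            maxday.PressureREL     = t.PressureREL;

    The same pattern repeats for the other column pairs (TemperatureOUT, Dewpoint, ...), so each maximum is located once instead of twice.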


  • Sorting GridView Formed With Data Set

    - by nani
    The following code is for sorting a GridView formed with a DataSet. Source: http://www.highoncoding.com/Articles/176_Sorting_GridView_Manually_.aspx But it is not displaying any output. There is no problem with the SQL connection. I am unable to trace the error; please help me. Thank you. public partial class _Default : System.Web.UI.Page { private const string ASCENDING = " ASC"; private const string DESCENDING = " DESC"; private DataSet GetData() { SqlConnection cnn = new SqlConnection("Server=localhost;Database=Northwind;Trusted_Connection=True;"); SqlDataAdapter da = new SqlDataAdapter("SELECT TOP 5 firstname,lastname,hiredate FROM EMPLOYEES", cnn); DataSet ds = new DataSet(); da.Fill(ds); return ds; } public SortDirection GridViewSortDirection { get { if (ViewState["sortDirection"] == null) ViewState["sortDirection"] = SortDirection.Ascending; return (SortDirection)ViewState["sortDirection"]; } set { ViewState["sortDirection"] = value; } } protected void GridView1_Sorting(object sender, GridViewSortEventArgs e) { string sortExpression = e.SortExpression; if (GridViewSortDirection == SortDirection.Ascending) { GridViewSortDirection = SortDirection.Descending; SortGridView(sortExpression, DESCENDING); } else { GridViewSortDirection = SortDirection.Ascending; SortGridView(sortExpression, ASCENDING); } } private void SortGridView(string sortExpression, string direction) { // You can cache the DataTable for improving performance DataTable dt = GetData().Tables[0]; DataView dv = new DataView(dt); dv.Sort = sortExpression + direction; GridView1.DataSource = dv; GridView1.DataBind(); } } aspx page: <asp:GridView ID="GridView1" runat="server" AllowSorting="True" OnSorting="GridView1_Sorting"></asp:GridView>


  • iTextSharp Conversion from Table to pdfPTable

    - by Al.
    I have an old ASP.NET project originally done in ASP.NET 1.1 w/ iText.NET and converted to .NET 2.0 and iTextSharp 4.1.6.0. It uses lots of Table (I'm assuming PdfPTable wasn't an option at the time it was created.) I am trying to convert this code to use the latest iTextSharp 5.0.0 dll and now see Table and Cell have been removed. I started converting it anyway and soon found there is no equivalent to a lot of the functionality that Table offered. Mainly, AddCell no longer allows a col,row setting. There are literally thousands of these calls in this code and the possibility of changing it to generate linearly row by row looks hopeless at the moment. The current code looks something like: Dim myTable As New Table(NumReq + 2, IngDS.Tables(0).Rows.Count + 3) myTable.SetWidths(Width) myTable.Width = 100 myTable.Padding = 2 myCell = New Cell(New Phrase("Some Text", New iTextSharp.text.Font(iTextSharp.text.Font.HELVETICA, 8, iTextSharp.text.Font.NORMAL, iTextSharp.text.Color.BLACK))) myCell.SetHorizontalAlignment(Element.ALIGN_RIGHT) myCell.GrayFill = 0.75 myTable.AddCell(myCell, Row, Col) myCell = New Cell(New Phrase("Other Text",New iTextSharp.text.Font(iTextSharp.text.Font.HELVETICA, 8, iTextSharp.text.Font.NORMAL, iTextSharp.text.Color.BLACK))) myCell.GrayFill = 0.75 myTable.AddCell(myCell, Row, Col+1) Before I embark down that road I was hoping someone would be able to point me in a direction that I'm just totally missing that will make this conversion much simpler. Any ideas? Thanks.


  • using jquery growl with php/mysql

    - by jeansymolanza
    On my database I am planning to create a table storing messages to alert users of anything they need to do. I am looking at using a jQuery growl-like notification method, but I'm confused about how I would begin building it. The data would be added into the database using the standard MySQL insert method from a form, but how would I select messages from the database to display using the jQuery growl? Would this require the use of Ajax? This is the JavaScript code I have so far; I was wondering how I would implement the PHP code alongside it so that I can pull out data from my tables to display as notifications: <script type="text/javascript"> // In case you don't have firebug... if (!window.console || !console.firebug) { var names = ["log", "debug", "info", "warn", "error", "assert", "dir", "dirxml", "group", "groupEnd", "time", "timeEnd", "count", "trace", "profile", "profileEnd"]; window.console = {}; for (var i = 0; i < names.length; ++i) window.console[names[i]] = function() {}; } (function($){ $(document).ready(function(){ // This specifies how many messages can be pooled out at any given time. // If there are more notifications raised then the pool, the others are // placed into queue and rendered after the other have disapeared. $.jGrowl.defaults.pool = 5; var i = 1; var y = 1; setInterval( function() { if ( i < 3 ) { $.jGrowl("Message " + i, { sticky: true, log: function() { console.log("Creating message " + i + "..."); }, beforeOpen: function() { console.log("Rendering message " + y + "..."); y++; } }); } i++; } , 1000 ); }); })(jQuery); </script> Thanking you in advance and God bless
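
    On the MySQL side, the usual shape is a notifications table that an Ajax endpoint polls and then marks as seen; the jGrowl call above would render whatever that endpoint returns. A minimal sketch with assumed table and column names:

        CREATE TABLE notification (
          id         INT AUTO_INCREMENT PRIMARY KEY,
          user_id    INT NOT NULL,
          message    VARCHAR(255) NOT NULL,
          created_at DATETIME NOT NULL,
          seen       TINYINT(1) NOT NULL DEFAULT 0
        );

        -- fetched by the Ajax endpoint, then rendered with $.jGrowl(...)
        SELECT id, message
        FROM notification
        WHERE user_id = 42 AND seen = 0     -- 42 is an illustrative user id
        ORDER BY created_at;

        -- after the messages have been displayed
        UPDATE notification SET seen = 1 WHERE user_id = 42 AND seen = 0;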


  • Multi-table Update(MySQL)

    - by smokinguns
    Hey all, I have a question regarding multi-table updates (MySQL). Consider tables t1 and t2. The PKEY for t1 is 'tid', which is a foreign key in t2. There is a field "qtyt2" in t2 which depends on a field called "qtyt1" in table t1. Consider the following SQL statement: UPDATE t2,t1 SET t2.qtyt2=IF(( t2.qtyt2- t1.qtyt1 )<0,0,( t2.qtyt2- t1.qtyt1 ) ), t1.qtyt1 ="Some value.." where t2.tid="some value.." AND t2.tid=t1.tid In this example qtyt2 depends on qtyt1 for the update, and the latter itself is updated. Now the result should return 2 if two rows are updated. Is there a guarantee that the fields will be updated in the order in which they appear in the statement (first qtyt2 will be set and then qtyt1)? Is it possible that qtyt1 will be set first and then qtyt2? Is the order of tables in the statement important (UPDATE t2, t1 or UPDATE t1,t2)? I found that if I wrote "UPDATE t1,t2" then only t1 would get updated, but on changing the statement to "UPDATE t2,t1" everything worked correctly.
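
    The MySQL reference manual says single-table UPDATE assignments are generally evaluated left to right, but gives no ordering guarantee for multiple-table updates, so it is safer not to let one assignment depend on a column the same statement also changes. A sketch of a defensive rewrite (illustrative literals stand in for the "some value" placeholders; the transaction assumes InnoDB tables):

        START TRANSACTION;

        -- step 1: adjust qtyt2 while t1.qtyt1 still holds its old value
        UPDATE t2
        JOIN t1 ON t1.tid = t2.tid
        SET t2.qtyt2 = GREATEST(t2.qtyt2 - t1.qtyt1, 0)
        WHERE t2.tid = 5;                 -- illustrative id

        -- step 2: now overwrite qtyt1
        UPDATE t1
        SET t1.qtyt1 = 123                -- illustrative new value
        WHERE t1.tid = 5;

        COMMIT;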


  • Making one table equal to another without a delete *

    - by Joshua Atkins
    Hey, I know this is a bit of a strange one, but if anyone has any help that would be greatly appreciated. The scenario is that we have a production database at a remote site and a developer database in our local office. Developers make changes directly to the developer db, and as part of the deployment process a C# application runs and produces a series of .sql scripts that we can execute on the remote side (essentially delete *, insert), but we are looking for something a bit more elaborate as the downtime from the delete * is unacceptable. This is all reference data that controls menu items, functionality, etc. of a major website. I have a sproc that essentially returns a diff of two tables. My thinking is that I can insert all the expected data into a tmp table, execute the diff, and drop anything from the destination table that is not in the source and then upsert everything else. The question is: is there an easy way to do this without using a cursor? To illustrate, the sproc returns a recordset structured like this: TableName Col1 Col2 Col3 Dest Src Anything in the recordset with TableName = Dest should be deleted (as it does not exist in src) and anything in Src should be upserted into dest. I cannot think of a way to do this purely set-based but my DB-fu is weak. Any help would be appreciated. Apologies if the explanation is sketchy; let me know if you need any more details.
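
    Assuming the staging table (src below) and the destination (dest) share a key column, the whole sync can be done set-based, without a cursor; a sketch with hypothetical table and column names:

        BEGIN TRANSACTION;

        -- rows that exist only in the destination
        DELETE d
        FROM dest AS d
        WHERE NOT EXISTS (SELECT 1 FROM src AS s WHERE s.id = d.id);

        -- rows present in both: refresh the payload columns
        UPDATE d
        SET d.col1 = s.col1, d.col2 = s.col2, d.col3 = s.col3
        FROM dest AS d
        JOIN src  AS s ON s.id = d.id;

        -- rows that exist only in the source
        INSERT INTO dest (id, col1, col2, col3)
        SELECT s.id, s.col1, s.col2, s.col3
        FROM src AS s
        WHERE NOT EXISTS (SELECT 1 FROM dest AS d WHERE d.id = s.id);

        COMMIT;

    On SQL Server 2008 the three statements can be collapsed into a single MERGE, but the explicit DELETE/UPDATE/INSERT form works on 2005 as well.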


  • Extracting table rows with a particular attribute from an HTML File, using HTMLAgilityPack

    - by Soham
    Consider this HTML snippet: <tr> <td valign=top class="tim_new"><a href="/stocks/company_info/pricechart.php?sc_did=MI42" class="tim_new">3M India</a></td> <td class="tim_new" valign=top><a href='/stocks/marketstats/indcomp.php?optex=NSE&indcode=Diversified' class=tim>Diversified</a></td> <td class="tim_new" align=right valign=top>2,487.25</td> <td class="tim_new" align=right valign=top><font color=#16a903>187.25</font></td> <td class="tim_new" align=right valign=top><font color=#16a903>8.14</font></td> <td class="tim_new" align=right valign=top>2,801.90</td> <td class="tim_new" align=right valign=top>0.06</td> </tr> Realize these three things: The HTML file from which this snippet has been taken contains multiple HTML tables. The table from which this snippet has been extracted doesn't contain only rows of the shown format, but also rows of other formats, for example: <tr><td colspan=7><img src="http://img1.moneycontrol.com/images/blank.gif"height="5"></td></tr> This same table contains multiple rows of the format which I need to extract. So given this scenario, is it possible to run code which extracts the links with the class name "tim_new"? Help appreciated, Soham


  • Building an extension framework for a Rails app

    - by obvio171
    I'm starting research on what I'd need in order to build a user-level plugin system (like Wordpress plugins) for a Rails app, so I'd appreciate some general pointers/advice. By user-level plugin I mean a package a user can extract into a folder and have it show up on an admin interface, allowing them to add some extra configuration and then activate it. What is the best way to go about doing this? Is there any other opensource project that does this already? What does Rails itself already offer for programmer-level plugins that could be leveraged? Any Rails plugins that could help me with this? A plugin would have to be able to: run its own migrations (with this? it's undocumented) have access to my models (plugins already do) have entry points for adding content to views (can be done with content_for and yield) replace entire views or partials (how?) provide its own admin and user-facing views (how?) create its own routes (or maybe just announce its presence and let me create the routes for it, to avoid plugins stepping on each other's toes) Anything else I'm missing? Also, is there a way to limit which tables/actions the plugin has access to concerning migrations and models, and also limit their access to routes (maybe letting them include, but not remove routes)? P.S.: I'll try to keep this updated, compiling stuff I figure out and relevant answers so as to have a sort of guide for others.


  • Mapping composite foreign keys in a many-many relationship in Entity Framework

    - by Kirk Broadhurst
    I have a Page table and a View table. There is a many-many relationship between these two via a PageView table. Unfortunately all of these tables need to have composite keys (for business reasons). Page has a primary key of (PageCode, Version), View has a primary key of (ViewCode, Version). PageView obviously enough has PageCode, ViewCode, and Version. The FK to Page is (PageCode, Version) and the FK to View is (ViewCode, Version). Makes sense and works, but when I try to map this in Entity Framework I get Error 3021: Problem in mapping fragments...: Each of the following columns in table PageView is mapped to multiple conceptual side properties: PageView.Version is mapped to (PageView_Association.View.Version, PageView_Association.Page.Version) So clearly enough, EF is complaining about the Version column being a common component of the two foreign keys. Obviously I could create a PageVersion and ViewVersion column in the join table, but that kind of defeats the point of the constraint, i.e. the Page and View must have the same Version value. Has anyone encountered this, and is there anything I can do to get around it? Thanks!
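
    For reference, the relational shape being described looks roughly like the DDL below (table and key names from the question, everything else assumed); the EF error is about PageView.Version feeding both foreign keys at the conceptual level:

        CREATE TABLE Page (
          PageCode INT NOT NULL,
          Version  INT NOT NULL,
          PRIMARY KEY (PageCode, Version)
        );

        CREATE TABLE [View] (
          ViewCode INT NOT NULL,
          Version  INT NOT NULL,
          PRIMARY KEY (ViewCode, Version)
        );

        CREATE TABLE PageView (
          PageCode INT NOT NULL,
          ViewCode INT NOT NULL,
          Version  INT NOT NULL,   -- shared by both FKs: Page and View must agree on Version
          PRIMARY KEY (PageCode, ViewCode, Version),
          FOREIGN KEY (PageCode, Version) REFERENCES Page (PageCode, Version),
          FOREIGN KEY (ViewCode, Version) REFERENCES [View] (ViewCode, Version)
        );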


  • How would I go about sharing variables in a class with Lua?

    - by Nicholas Flynt
    I'm fairly new to Lua; I've been working on trying to implement Lua scripting for logic in a Game Engine I'm putting together. I've had no trouble so far getting Lua up and running through the engine, and I'm able to call Lua functions from C and C functions from Lua. The way the engine works now, each Object class contains a set of variables that the engine can quickly iterate over to draw or process for physics. While game objects all need to access and manipulate these variables in order for the Game Engine itself to see any changes, they are free to create their own variables, and Lua is exceedingly flexible about this, so I don't foresee any issues. Anyway, currently the Game Engine side of things is sitting in C land, and I really want it to stay there for performance reasons. So in an ideal world, when spawning a new game object, I'd need to be able to give Lua read/write access to this standard set of variables as part of the Lua object's base class, which its game logic could then proceed to run wild with. So far, I'm keeping two separate tables of objects in place-- Lua spawns a new game object which adds itself to a numerically indexed global table of objects, and then proceeds to call a C++ function, which creates a new GameObject class and registers the Lua index (an int) with the class. So far so good, C++ functions can now see the Lua object and easily perform operations or call functions in Lua land using dostring. What I need to do now is take the C++ variables, part of the GameObject class, and expose them to Lua, and this is where Google is failing me. I've encountered a very nice method here which details the process using tags, but I've read that this method is deprecated in favor of metatables. What is the ideal way to accomplish this? Is it worth the hassle of learning how to pass class definitions around using libBind or some equivalent method, or is there a simple way I can just register each variable (once, at spawn time) with the global lua object? What's the "current" best way to do this, as of Lua 5.1.4?


  • Require help in Writing Query

    - by harigm
    The following image has been uploaded to show what I am trying to do and what I want out of it. Can anyone help me write the query to get the results I want? Please check the following: SELECT * FROM KPT WHERE PROPERTY_ID IN (SELECT PROPERTY_ID FROM khata_header WHERE DIV_ID = 3 and RECORD_STATUS = 0) and CHALLAN_NO > 42646 The above is the query I have written and I have got the following result set: ID CHALLAN_NO PROPERTY_ID SITE_NO TOTAL_AMOUNT ----- ------------- -------------- ------------------- --------------- 1242 42757 3103010141 296 595 1243 63743 3204190257 483 594 1244 63743 3204190257 483 594 1334 43395 3217010223 1088 576 1421 524210 3320050416 (null) (null) 1422 524210 3320050416 (null) (null) 1560 564355 3320021408 (null) (null) 1870 516292 3320040420 (null) (null) 1940 68357 3217100104 139 1153 1941 68357 3217100104 139 1153 2002 56256 3320100733 511 4430 2003 56256 3320100733 511 4430 2004 66488 3217040869 293 3094 2005 66488 3217040869 293 3094 2016 64571 3217040374 (null) (null) 2036 523122 3320020352 (null) (null) 2039 65682 3217040021 273 919 In my result set I am getting the PropertyId repeated, since there are multiple entries. How can I know how many have been repeated, and which are those Property Ids that have repeated more than 2 times? A little background about the tables: PROPERTY_ID is the FK in KPT; PROPERTY_ID is the PK in KH. I am writing a subquery to get the result, but I am stuck; I don't know how to get my results. Please help.
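
    To see which PROPERTY_ID values repeat, and how many times each, the same filters can be grouped; a sketch (the HAVING threshold counts values appearing two or more times, adjust as needed):

        SELECT PROPERTY_ID, COUNT(*) AS occurrences
        FROM KPT
        WHERE PROPERTY_ID IN (SELECT PROPERTY_ID
                              FROM khata_header
                              WHERE DIV_ID = 3 AND RECORD_STATUS = 0)
          AND CHALLAN_NO > 42646
        GROUP BY PROPERTY_ID
        HAVING COUNT(*) >= 2
        ORDER BY occurrences DESC;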


  • T-SQL Table Joins - Unique Situation

    - by Dimitri
    Hello everyone. This is my first time encountering a case like this and I don't quite know how to handle it. Situation: I have one table, tblSettingsDefinition, with fields: ID, GroupID, Name, typeID, DefaultValue. Then I have tblSettingtypes with fields TypeID, Name. And I have a final table, tblUserSettings, with fields SettingID, SettingDefinitionID, UserID, Value. The whole point of this is to have customizable settings. A setting can be defined for a group or as a global setting (if GroupID is NULL). It will have a default value, but if a user modifies the setting, an entry is added to tblUserSettings that stores the new value. I want a query that grabs user settings by first looking at tblUserSettings and, if it has records for the given user, grabs them; if not, it retrieves the default settings. But the trick is that whether or not the user has settings, I need the fields from the other two tables retrieved to know the setting's Type, Name etc... (which are stored in those other tables). I'm writing a query something like this: SELECT * FROM tblSettingDefinition SD LEFT JOIN tblUserSettings US ON SD.SettingID = US.SettingDefinitionID JOIN tblSettingTypes ST ON SD.TypeID=ST.ID WHERE US.UserID=@UserID OR ((SD.GroupID IS NULL) OR (SD.GroupID=(SELECT GroupID FROM tblUser WHERE ID=@UserID))) but it retrieves settings for all users from tblUserSettings instead of just the ones that match the current @UserID. And if @UserID has no records in tblUserSettings, still, all user settings are retrieved instead of the defaults from tblSettingDefinition. Hope I made myself clear. Any help would be highly appreciated. Thank you.
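
    A common shape for an "override or default" lookup is to move the user filter into the LEFT JOIN condition instead of the WHERE clause, then coalesce the value; a sketch against the tables described (some column names are assumed, since the question's field list and query disagree slightly):

        SELECT SD.ID,
               SD.Name,
               ST.Name AS TypeName,
               COALESCE(US.Value, SD.DefaultValue) AS EffectiveValue
        FROM tblSettingsDefinition AS SD
        JOIN tblSettingTypes AS ST
          ON ST.TypeID = SD.typeID
        LEFT JOIN tblUserSettings AS US
          ON US.SettingDefinitionID = SD.ID
         AND US.UserID = @UserID                     -- user filter lives in the ON clause
        WHERE SD.GroupID IS NULL
           OR SD.GroupID = (SELECT GroupID FROM tblUser WHERE ID = @UserID);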


  • Integer Surrogate Key?

    - by CitadelCSAlum
    I need something real simple that, for some reason, I have been unable to accomplish to this point. I have also been unable to Google the answer, surprisingly. I need to number the entries in different tables uniquely. I am aware of AUTO_INCREMENT in MySQL and that it will be involved. My situation would be as follows. If I had a table like: Table EXAMPLE ID - INTEGER FIRST_NAME - VARCHAR(45) LAST_NAME - VARCHAR(45) PHONE - VARCHAR(45) CITY - VARCHAR(45) STATE - VARCHAR(45) ZIP - VARCHAR(45) This would be the setup for the table where ID is an integer that is auto-incremented every time an entry is inserted into the table. The thing I need is that I do not want to have to account for this field when inserting data into the database. From my understanding this would be a surrogate key that I can tell the database to automatically increment, and I do not have to include it in the INSERT statement, so instead of INSERT INTO EXAMPLE VALUES (2,'JOHN','SMITH',333-333-3333,'NORTH POLE'.... I can leave out the first ID column and just write something like INSERT INTO EXAMPLE VALUES ('JOHN','SMITH'.....etc) Notice I wouldn't have to define the ID column... I know this is a very common task to do, but for some reason I can't get to the bottom of it. I am using MySQL, just to clarify. Thanks a lot
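
    A sketch of the MySQL DDL and INSERT that behaves exactly that way; the two ingredients are AUTO_INCREMENT on the primary key and an explicit column list in the INSERT (the sample values are illustrative):

        CREATE TABLE EXAMPLE (
          ID         INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
          FIRST_NAME VARCHAR(45),
          LAST_NAME  VARCHAR(45),
          PHONE      VARCHAR(45),
          CITY       VARCHAR(45),
          STATE      VARCHAR(45),
          ZIP        VARCHAR(45)
        );

        -- ID is filled in automatically because it is not named in the column list
        INSERT INTO EXAMPLE (FIRST_NAME, LAST_NAME, PHONE, CITY, STATE, ZIP)
        VALUES ('JOHN', 'SMITH', '333-333-3333', 'NORTH POLE', 'AK', '99705');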


  • Query MySQL data from Excel (or vice-versa)

    - by Charles
    I'm trying to automate a tedious problem. I get large Excel (.xls or .csv, whatever's more convenient) files with lists of people. I want to compare these against my MySQL database.* At the moment I'm exporting MySQL tables and reading them from an Excel spreadsheet. At that point it's not difficult to use =LOOKUP() and such commands to do the work I need, and of course the various text processing I need to do is easy enough to do in Excel. But I can't help but think that this is more work than it needs to be. Is there some way to get at the MySQL data directly from Excel? Alternately, is there a way I could access a reasonably large (~10k records) csv file in a sql script? This seems to be rather basic, but I haven't managed to make it work so far. I found an ODBC connection for MySQL but that doesn't seem to do what I need. In particular, I'm testing whether the name matches or whether any of four email addresses match. I also return information on what matched for the benefit of the next person to use the data, something like "Name 'Bob Smith' not found, but 'Robert Smith' matches on email address robert.smith@foo".
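
    For the "csv file in a sql script" half of the question, one hedged approach is to load the spreadsheet export into a staging table and do the matching inside MySQL; a sketch in which the file path, column layout and the contacts table are all assumptions:

        CREATE TABLE staging_people (
          name   VARCHAR(100),
          email1 VARCHAR(100),
          email2 VARCHAR(100),
          email3 VARCHAR(100),
          email4 VARCHAR(100)
        );

        LOAD DATA LOCAL INFILE '/tmp/people.csv'    -- path and csv layout are assumptions
        INTO TABLE staging_people
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        IGNORE 1 LINES;

        -- report whether the name or any of the four addresses matches a known contact
        SELECT p.name, p.email1, c.name AS matched_name, c.email AS matched_email
        FROM staging_people AS p
        LEFT JOIN contacts AS c
          ON c.name = p.name
          OR c.email IN (p.email1, p.email2, p.email3, p.email4);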


  • Linq to SQL generates StackOverflowException in tight Insert loop

    - by ChrisW
    I'm parsing an XML file and inserting the rows into a table (and related tables) using LinqToSQL. I parse the XML file using LinqToXml into IEnumerable. Then, I create a foreach loop, where I build my LinqToSQL objects and call InsertOnSubmit and SubmitChanges at the end of each loop. Nothing special here. Usually, I make it through around 4,100 records before receiving a StackOverflowException from LinqToSql, right as I call SubmitChanges. It's not always on 4,100... sometimes it's 4,102, sometimes less, etc. I've tried inserting the records that generate the failure individually, by putting them in their own Xml file, and that inserts fine... so it's not the data. I'm running the whole process from an MVC2 app that is uploading the Xml file to the server. I've adjusted my WebRequest timeouts to appropriate values, and again, I'm not getting timeout errors, just StackOverflowExceptions. So is there some pattern that I should follow for times when I have to do many insertions into the database? I never encounter this exception on smaller Xml files, just larger ones.


  • Optimize GROUP BY & ORDER BY query

    - by Jan Hancic
    I have a web page where users upload & watch videos. Last week I asked what the best way is to track video views so that I could display the most viewed videos this week (videos from all dates). Now I need some help optimizing a query with which I get the videos from the database. The relevant tables are these: video (~239371 rows) VID(int), UID(int), title(varchar), status(enum), type(varchar), is_duplicate(enum), is_adult(enum), channel_id(tinyint) signup (~115440 rows) UID(int), username(varchar) videos_views (~359202 rows after 6 days of collecting data, so this table will grow rapidly) videos_id(int), views_date(date), num_of_views(int) The table video holds the videos, signup holds users, and videos_views holds data about video views (each video can have one row per day in that table). I have this query that does the trick, but takes ~10s to execute, and I imagine this will only get worse over time as the videos_views table grows in size. SELECT v.VID, v.title, v.vkey, v.duration, v.addtime, v.UID, v.viewnumber, v.com_num, v.rate, v.THB, s.username, SUM(vvt.num_of_views) AS tmp_num FROM video v LEFT JOIN videos_views vvt ON v.VID = vvt.videos_id LEFT JOIN signup s on v.UID = s.UID WHERE v.status = 'Converted' AND v.type = 'public' AND v.is_duplicate = '0' AND v.is_adult = '0' AND v.channel_id <> 10 AND vvt.views_date >= '2001-05-11' GROUP BY vvt.videos_id ORDER BY tmp_num DESC LIMIT 8 And here is a screenshot of the EXPLAIN result: So, how can I optimize this?
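
    One common rewrite is to aggregate videos_views first in a derived table, so the GROUP BY runs over per-video sums before the join to video and signup; whether it actually beats the original depends on the data and indexes (a composite index on videos_views (views_date, videos_id, num_of_views) is the usual companion). A sketch:

        SELECT v.VID, v.title, v.vkey, v.duration, v.addtime, v.UID,
               v.viewnumber, v.com_num, v.rate, v.THB,
               s.username, vw.tmp_num
        FROM (
            SELECT videos_id, SUM(num_of_views) AS tmp_num
            FROM videos_views
            WHERE views_date >= '2010-05-11'         -- illustrative start of the week
            GROUP BY videos_id
        ) AS vw
        JOIN video v       ON v.VID = vw.videos_id
        LEFT JOIN signup s ON s.UID = v.UID
        WHERE v.status = 'Converted' AND v.type = 'public'
          AND v.is_duplicate = '0' AND v.is_adult = '0'
          AND v.channel_id <> 10
        ORDER BY vw.tmp_num DESC
        LIMIT 8;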


  • Planning management slots/sessions

    - by Glide
    I have a planning structure on two tables to store available slots by day, and sessions. A slot is defined by a range of time in the day. CREATE TABLE slot ( `id` int(11) NOT NULL AUTO_INCREMENT , `date` date , `start` time , `end` time ); Sessions can't overlap themselves and must be wrapped in a slot. CREATE TABLE session ( `id` int(11) NOT NULL AUTO_INCREMENT , `date` date , `start` time , `end` time ); I need to generate a list of available blocks of time of a certain duration, in order to create sessions. Example: INSERT INTO slot (date, start, end) VALUES ("2010-01-01", "10:00", "19:00") , ("2010-01-02", "10:00", "15:00") , ("2010-01-02", "16:00", "20:30") ; 2010-01-01 <##><####> <- Sessions ------------------------------------ <- Slots 10 11 12 13 14 15 16 17 18 19 20 2010-01-02 <##########> <########> <- Sessions -------------------- ------------------ <- Slots 10 11 12 13 14 15 16 17 18 19 20 I need to know which spaces of 1 hour I can use: +------------+-------+-------+ | date | start | end | +------------+-------+-------+ | 2010-01-01 | 13:00 | 14:00 | | 2010-01-01 | 14:00 | 15:00 | | 2010-01-01 | 15:00 | 16:00 | | 2010-01-01 | 16:00 | 17:00 | | 2010-01-01 | 17:00 | 18:00 | | 2010-01-01 | 18:00 | 19:00 | | 2010-01-02 | 10:00 | 11:00 | | 2010-01-02 | 11:00 | 12:00 | | 2010-01-02 | 16:00 | 17:00 | +------------+-------+-------+
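
    One set-based way to produce that result is to expand each slot into candidate one-hour blocks with a small helper table of offsets and then anti-join the sessions that overlap them; a sketch assuming a helper table hour_offset(n) pre-filled with 0 through 23:

        SELECT sl.`date`,
               ADDTIME(sl.`start`, SEC_TO_TIME(h.n * 3600))       AS block_start,
               ADDTIME(sl.`start`, SEC_TO_TIME((h.n + 1) * 3600)) AS block_end
        FROM slot AS sl
        JOIN hour_offset AS h
          ON ADDTIME(sl.`start`, SEC_TO_TIME((h.n + 1) * 3600)) <= sl.`end`
        LEFT JOIN session AS se
          ON se.`date` = sl.`date`
         AND se.`start` < ADDTIME(sl.`start`, SEC_TO_TIME((h.n + 1) * 3600))
         AND se.`end`   > ADDTIME(sl.`start`, SEC_TO_TIME(h.n * 3600))
        WHERE se.id IS NULL
        ORDER BY sl.`date`, block_start;

    The blocks here are anchored to each slot's start time, which lines up with the expected output because the sample slots all begin on the hour.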


  • Newbie T-SQL dynamic stored procedure -- how can I improve it?

    - by Andy Jones
    I'm new to T-SQL; all my experience is in a completely different database environment (Openedge). I've learned enough to write the procedure below -- but also enough to know that I don't know enough! This routine will have to go into a live environment soon, and it works, but I'm quite certain there are a number of c**k-ups and gotchas in it that I know nothing about. The routine copies data from table A to table B, replacing the data in table B. The tables could be in any database. I plan to call this routine multiple times from another stored procedure. Permissions aren't a problem: the routine will be run by the dba as a timed job. Could I have your suggestions as to how to make it fit best-practice? To bullet-proof it? ALTER PROCEDURE [dbo].[copyTable2Table] @sdb varchar(30), @stable varchar(30), @tdb varchar(30), @ttable varchar(30), @raiseerror bit = 1, @debug bit = 0 as begin set nocount on declare @source varchar(65) declare @target varchar(65) declare @dropstmt varchar(100) declare @insstmt varchar(100) declare @ErrMsg nvarchar(4000) declare @ErrSeverity int set @source = '[' + @sdb + '].[dbo].[' + @stable + ']' set @target = '[' + @tdb + '].[dbo].[' + @ttable + ']' set @dropStmt = 'drop table ' + @target set @insStmt = 'select * into ' + @target + ' from ' + @source set @errMsg = '' set @errSeverity = 0 if @debug = 1 print('Drop:' + @dropStmt + ' Insert:' + @insStmt) -- drop the target table, copy the source table to the target begin try begin transaction exec(@dropStmt) exec(@insStmt) commit end try begin catch if @@trancount > 0 rollback select @errMsg = error_message(), @errSeverity = error_severity() end catch -- update the log table insert into HHG_system.dbo.copyaudit (copytime, copyuser, source, target, errmsg, errseverity) values( getdate(), user_name(user_id()), @source, @target, @errMsg, @errSeverity) if @debug = 1 print ( 'Message:' + @errMsg + ' Severity:' + convert(Char, @errSeverity) ) -- handle errors, return value if @errMsg <> '' begin if @raiseError = 1 raiserror(@errMsg, @errSeverity, 1) return 1 end return 0 END Thanks!
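
    Two tweaks reviewers usually ask for in a routine like this, sketched below: build the three-part names with QUOTENAME so unusual (or hostile) identifiers cannot break the statement, guard the DROP so the first-ever copy does not fail, and execute through sys.sp_executesql:

        DECLARE @source nvarchar(300), @target nvarchar(300),
                @dropStmt nvarchar(max), @insStmt nvarchar(max);

        SET @source = QUOTENAME(@sdb) + N'.dbo.' + QUOTENAME(@stable);
        SET @target = QUOTENAME(@tdb) + N'.dbo.' + QUOTENAME(@ttable);

        -- only drop the target if it already exists
        SET @dropStmt = N'IF OBJECT_ID(N''' + @target + N''') IS NOT NULL DROP TABLE ' + @target + N';';
        SET @insStmt  = N'SELECT * INTO ' + @target + N' FROM ' + @source + N';';

        EXEC sys.sp_executesql @dropStmt;
        EXEC sys.sp_executesql @insStmt;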


  • Is it possible to update old database from dbml file ? (C#, .Net 4, Linq, SQL Server)

    - by Emil
    Hi all, I recently began a new job, a very interesting project (C#, .Net 4, Linq, VS 2010 and SQL Server). And immediately I got a very exciting challenge: I must implement either a new tool or integrate the logic at program start, or whatever, but what must happen is the following: the customers have a previous application and database (full of their specific data). Now a new version is ready and the customer gets the update. In the meantime we have made some modifications to the DB (new tables, columns, maybe an old column deleted, or whatever). I'm pretty new to Linq and also to SQL databases, and my first solution would be: I check the application/database versions and implement all the changes step by step, comparing all tables, columns, keys, constraints, etc. (all the new information I have in my dbml, and the old I can query from the existing DB). And I'll do this each time the version changes. But somehow I feel this is NOT a smart solution, so I am looking for a general solution to this problem. Is there a way to update the customer's DB from the dbml file? Creating a new one is not a problem (CreateDatabase with DataContext), but are there any update/alter database methods? I guess I'm not the only one who is searching for such a solution (I found nothing on the internet – or I searched for the wrong keywords). How did you solve this problem? I am also looking at external tools, but first for a solution with C#, Linq or something similar. Thank you in advance for any ideas! Best regards, Emil
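
    Whatever ends up generating the changes, the usual database-side pattern is a small version table plus incremental, idempotent ALTER scripts applied in order at application start; a hedged T-SQL sketch (table, column and version numbers are invented for illustration):

        CREATE TABLE SchemaVersion (
          Version   INT NOT NULL PRIMARY KEY,
          AppliedAt DATETIME NOT NULL DEFAULT GETDATE()
        );

        -- migration step 2: only runs if the database has not reached version 2 yet
        IF NOT EXISTS (SELECT 1 FROM SchemaVersion WHERE Version >= 2)
        BEGIN
            ALTER TABLE Customer ADD PhoneNumber NVARCHAR(30) NULL;   -- example change
            INSERT INTO SchemaVersion (Version) VALUES (2);
        END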

