Search Results

Search found 38138 results on 1526 pages for 'non sql'.


  • What to do if 2 (or more) relationship tables would have the same name?

    - by primehunter326
    So I know the convention for naming M-M relationship tables in SQL is something like this: for tables User and Data, the relationship table would be called UserData or User_Data, or something similar (from here). What happens, then, if you need to have multiple relationships between User and Data, representing each in its own table? I have a site I'm working on where I have two primary items and multiple independent M-M relationships between them. I know I could just use a single relationship table and have a field which determines the relationship type, but I'm not sure whether this is a good solution. Assuming I don't go that route, what naming convention should I follow to work around my original problem?
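
    One convention that keeps the names unambiguous when several M-M relationships exist between the same two tables is to qualify the junction-table name with the relationship's role. A minimal sketch, with made-up table, column, and role names:

        -- Hypothetical role-qualified junction tables between User and Data
        CREATE TABLE User_Data_Favorite (
            UserID INT NOT NULL REFERENCES [User](UserID),
            DataID INT NOT NULL REFERENCES Data(DataID),
            PRIMARY KEY (UserID, DataID)
        );

        CREATE TABLE User_Data_Subscription (
            UserID INT NOT NULL REFERENCES [User](UserID),
            DataID INT NOT NULL REFERENCES Data(DataID),
            SubscribedOn DATETIME NOT NULL,
            PRIMARY KEY (UserID, DataID)
        );

    The role suffix also leaves room for relationship-specific columns (like SubscribedOn above) without overloading a single table with a type flag.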

    Read the article

  • Help with my application please! Can’t open image(s) with error: External component has thrown an exception

    - by Brandon
    I have an application, written in C# I believe, and it adds images to a SQL Server 2005 database. It requires .NET 3.5 to be installed on my computer. I installed .NET 3.5 and set up a database. It runs fine, but once it gets to image 100 when running on one computer, it stops and gives me this error: Can't open image(s) with error: External component has thrown an exception.... When I run the program on my own computer I am able to reach 300 images, but then it stops and gives me the Can't open image(s) with error: External component has thrown an exception.... error once again. Please help!

    Read the article

  • Query "where clause" fails when calling a function

    - by guest1
    Hi All, I have a function in Access VBA that takes four parameters. The fourth parameter is a "where clause" that I use in an SQL statement inside the function. The function fails when I include the fourth parameter (the where clause); when I remove it, the function works fine. I am not sure if there is anything wrong with the syntax of the fourth parameter. Please help. Here is the function as called in the query: FunctionA('Table1','Field1',0.3,'Field2=#' & [Field2] & '# and Value3="' & [Value3] & '"') AS Duration_Field

    Read the article

  • Concurrency handling

    - by Lijo
    Hi, suppose I am about to start a project using ASP.NET and SQL Server 2005. I have to design the concurrency requirement for this application. I am planning to add a TimeStamp column to each table. While updating the tables I will check that the TimeStamp column is the same as it was when selected. Will this approach suffice? Or are there any shortcomings to this approach under any circumstances? Please advise. Thanks, Lijo
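
    A minimal sketch of the optimistic-concurrency pattern being described, assuming SQL Server's rowversion/timestamp type; the table and column names are hypothetical:

        -- Read the row and remember its version
        SELECT OrderID, Status, RowVer
        FROM dbo.Orders
        WHERE OrderID = @OrderID;

        -- Update only if nobody changed the row since it was read
        UPDATE dbo.Orders
        SET Status = @NewStatus
        WHERE OrderID = @OrderID
          AND RowVer = @OriginalRowVer;

        IF @@ROWCOUNT = 0
            RAISERROR('The row was modified by another user.', 16, 1);

    Note that a rowversion column is maintained by the engine itself on every update, so the application never writes it, only compares it.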

    Read the article

  • Why does this code do if(sz != sz2) sz = sz2?!

    - by acidzombie24
    For the first time I created LINQ to SQL classes. I decided to look at the generated class and found this. What... why is it doing if(sz != sz2) { sz = sz2; }? I don't understand. Why isn't the setter generated as just this._Property1 = value?

        private string _Property1;

        [Column(Storage="_Property1", CanBeNull=false)]
        public string Property1
        {
            get { return this._Property1; }
            set
            {
                if ((this._Property1 != value))
                {
                    this._Property1 = value;
                }
            }
        }

    Read the article

  • SQLite delete the last 25% of records in a database.

    - by Steven smethurst
    I am using a SQLite database to store values from a data logger. The data logger will eventually fill up all the available hard drive space on the computer. I'm looking for a way to remove the last 25% of the logs from the database once it reaches a certain limit. I am using the following code:

        $ret = Query( 'SELECT id as last FROM data ORDER BY id desc LIMIT 1 ;' );
        $last_id = $ret[0]['last'] ;
        $ret = Query( 'SELECT count( * ) as total FROM data' );
        $start_id = $last_id - $ret[0]['total'] * 0.75 ;
        Query( 'DELETE FROM data WHERE id < '. round( $start_id, 0 ) );

    A journal file gets created next to the database and fills up the remaining space on the drive until the script fails. How can I stop this journal file from being created? Is there any way to combine all three SQL queries into one statement?
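
    Two things that might help, sketched below and untested against the poster's schema: the journal can be redirected or disabled with a PRAGMA (at the cost of crash safety), and the three statements can be collapsed into one DELETE with a scalar subquery.

        -- Reduce or avoid rollback-journal growth; OFF trades away crash recovery (MEMORY is a middle ground).
        -- Available journal modes depend on the SQLite version in use.
        PRAGMA journal_mode = OFF;

        -- Single-statement version of the three queries above
        DELETE FROM data
        WHERE id < (SELECT MAX(id) - COUNT(*) * 0.75 FROM data);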

    Read the article

  • More CPU cores may not always lead to better performance – MAXDOP and query memory distribution in spotlight

    - by sqlworkshops
    More hardware normally delivers better performance, but there are exceptions where it can hinder performance. Understanding these exceptions and working around them is a major part of SQL Server performance tuning.

    When a memory-allocating query executes in parallel, SQL Server distributes memory to each task that is executing part of the query in parallel. In our example, the sort operator that executes in parallel divides the memory across all tasks assuming an even distribution of rows. Common memory-allocating queries are those that perform Sort or Hash Match operations such as Hash Join, Hash Aggregation, or Hash Union.

    In reality, how often are column values evenly distributed? Think about an example: are the employees working for your company distributed evenly across all Zip codes, or mainly concentrated at the headquarters? What happens when you sort a result set based on Zip codes? Do all products in the catalog sell equally, or are a few products the hot-selling items?

    One of my customers tested the below example on a 24-core server with various MAXDOP settings, and here are the results:

        MAXDOP 1:  CPU time = 1185 ms, elapsed time = 1188 ms
        MAXDOP 4:  CPU time = 1981 ms, elapsed time = 1568 ms
        MAXDOP 8:  CPU time = 1918 ms, elapsed time = 1619 ms
        MAXDOP 12: CPU time = 2367 ms, elapsed time = 2258 ms
        MAXDOP 16: CPU time = 2540 ms, elapsed time = 2579 ms
        MAXDOP 20: CPU time = 2470 ms, elapsed time = 2534 ms
        MAXDOP 0:  CPU time = 2809 ms, elapsed time = 2721 ms (all 24 cores)

    In the above test, when the data was evenly distributed, the elapsed time of the parallel query was always lower than that of the serial query.

    Why does the query get slower and slower with more CPU cores / higher MAXDOP? Maybe you can answer this question after reading the article; let me know: [email protected].

    Well, you get the point; let's see an example.

    The best way to learn is to practice. To create the below tables and reproduce the behavior, join the mailing list by using this link: www.sqlworkshops.com/ml and I will send you the table creation script.

    Let's update the Employees table with 49 out of 50 employees located in Zip code 2001.

        update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
        update Employees set Zip = 2001 where EmployeeID % 50 != 1
        go
        update statistics Employees with fullscan
        go

    Let's create the temporary table #FireDrill with all possible Zip codes.

        drop table #FireDrill
        go
        create table #FireDrill (Zip int primary key)
        insert into #FireDrill select distinct Zip from Employees
        update statistics #FireDrill with fullscan
        go

    Let's execute the query serially with MAXDOP 1.

        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --First serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
        option (maxdop 1)
        go

    The query took 1011 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 799624. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0.

        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
        option (maxdop 0)
        go

    The query took 1912 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 799624. The estimated number of rows is the same for the serial and parallel plans; the parallel plan has slightly more memory granted due to additional overhead. The sort properties show the rows are unevenly distributed over the 4 threads. Sort Warnings in SQL Server Profiler.

    Intermediate Summary: The reason for the higher duration with the parallel plan was sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001.

    Now let's update the Employees table and distribute employees evenly across all Zip codes.

        update Employees set Zip = EmployeeID / 400 + 1
        go
        update statistics Employees with fullscan
        go

    Let's execute the query serially with MAXDOP 1.

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --Serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
        option (maxdop 1)
        go

    The query took 751 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0.

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
        option (maxdop 0)
        go

    The query took 661 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 784707. The sort properties show the rows are evenly distributed over the 4 threads. No Sort Warnings in SQL Server Profiler.

    Intermediate Summary: When employees were distributed unevenly, concentrated in one Zip code, the parallel sort spilled while the serial sort performed well without spilling to tempdb. When the employees were distributed evenly across all Zip codes, neither the parallel sort nor the serial sort spilled to tempdb. This shows uneven data distribution may affect the performance of some parallel queries negatively. For a detailed discussion of memory allocation, refer to the webcasts available at www.sqlworkshops.com/webcasts.

    Some of you might conclude from the above execution times that parallel query is not faster even when there is no spill. Below you can see that when we are joining a limited number of Zip codes, the parallel query will be fastest since it can use Bitmap Filtering.

    Let's update the Employees table with 49 out of 50 employees located in Zip code 2001.

        update Employees set Zip = EmployeeID / 400 + 1 where EmployeeID % 50 = 1
        update Employees set Zip = 2001 where EmployeeID % 50 != 1
        go
        update statistics Employees with fullscan
        go

    Let's create the temporary table #FireDrill with limited Zip codes.

        drop table #FireDrill
        go
        create table #FireDrill (Zip int primary key)
        insert into #FireDrill select distinct Zip
              from Employees where Zip between 1800 and 2001
        update statistics #FireDrill with fullscan
        go

    Let's execute the query serially with MAXDOP 1.

        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --Serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
        option (maxdop 1)
        go

    The query took 989 ms to complete. The execution plan shows that 77816 KB of memory was granted while the estimated rows were 785594. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0.

        --Example provided by www.sqlworkshops.com
        --Execute query with uneven Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
        option (maxdop 0)
        go

    The query took 1799 ms to complete. The execution plan shows that 79360 KB of memory was granted while the estimated rows were 785594. Sort Warnings in SQL Server Profiler. The estimated number of rows is the same for the serial and parallel plans; the parallel plan has slightly more memory granted due to additional overhead.

    Intermediate Summary: The reason for the higher duration with the parallel plan, even with a limited number of Zip codes, was sort spill. This is due to the uneven distribution of employees over Zip codes, especially the concentration of 49 out of 50 employees in Zip code 2001.

    Now let's update the Employees table and distribute employees evenly across all Zip codes.

        update Employees set Zip = EmployeeID / 400 + 1
        go
        update statistics Employees with fullscan
        go

    Let's execute the query serially with MAXDOP 1.

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --Serially with MAXDOP 1
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
        option (maxdop 1)
        go

    The query took 250 ms to complete. The execution plan shows that 9016 KB of memory was granted while the estimated rows were 79973.8. No Sort Warnings in SQL Server Profiler.

    Now let's execute the query in parallel with MAXDOP 0.

        --Example provided by www.sqlworkshops.com
        --Execute query with even Zip code distribution
        --In parallel with MAXDOP 0
        set statistics time on
        go
        declare @EmployeeID int, @EmployeeName varchar(48), @zip int
        select @EmployeeName = e.EmployeeName, @zip = e.Zip from Employees e
              inner join #FireDrill fd on (e.Zip = fd.Zip)
              order by e.Zip
        option (maxdop 0)
        go

    The query took 85 ms to complete. The execution plan shows that 13152 KB of memory was granted while the estimated rows were 784707. No Sort Warnings in SQL Server Profiler.

    Here you see the parallel query is much faster than the serial query, since SQL Server is using Bitmap Filtering to eliminate rows before the hash join.

    Parallel queries are very good for performance, but in some cases they can hinder performance. If one identifies the reason for these hindrances, then it is possible to get the best out of parallelism. I covered many aspects of monitoring and tuning parallel queries in webcasts (www.sqlworkshops.com/webcasts) and articles (www.sqlworkshops.com/articles). I suggest you watch the webcasts and read the articles to better understand how to identify and tune parallel query performance issues.

    Summary: One has to avoid sort spills to tempdb, and the chances of spills are higher when a query executes in parallel with uneven data distribution. Parallel query brings its own advantages: reduced elapsed time and reduced work with Bitmap Filtering. So it is important to understand how to avoid spills to tempdb and when to execute a query in parallel.

    I explain these concepts with detailed examples in my webcasts (www.sqlworkshops.com/webcasts); I recommend you watch them. The best way to learn is to practice. To create the above tables and reproduce the behavior, join the mailing list at www.sqlworkshops.com/ml and I will send you the relevant SQL scripts.

    Register for the upcoming 3 Day Level 400 Microsoft SQL Server 2008 and SQL Server 2005 Performance Monitoring & Tuning Hands-on Workshop in London, United Kingdom during March 15-17, 2011, click here to register / Microsoft UK TechNet. These are hands-on workshops with a maximum of 12 participants, not lectures. For consulting engagements click here.

    Disclaimer and copyright information: This article refers to organizations and products that may be the trademarks or registered trademarks of their various owners. Copyright of this article belongs to R Meyyappan / www.sqlworkshops.com. You may freely use the ideas and concepts discussed in this article with acknowledgement (www.sqlworkshops.com), but you may not claim any of it as your own work. This article is for informational purposes only; you use any of the suggestions given here entirely at your own risk.

    R Meyyappan
    [email protected]
    LinkedIn: http://at.linkedin.com/in/rmeyyappan

    Read the article

  • Application Performance Episode 2: Announcing the Judges!

    - by Michaela Murray
    The story so far… We’re writing a new book for ASP.NET developers, and we want you to be a part of it! If you work with ASP.NET applications, and have top tips, hard-won lessons, or sage advice for avoiding, finding, and fixing performance problems, we want to hear from you! And if your app uses SQL Server, even better – interaction with the database is critical to application performance, so we’re looking for database top tips too. There’s a Microsoft Surface apiece for the person who comes up with the best tip for SQL Server and the best tip for .NET. Of course, if your suggestion is selected for the book, you’ll get full credit, by name, Twitter handle, GitHub repository, or whatever you like. To get involved, just email your nuggets of performance wisdom to [email protected] – there are examples of what we’re looking for and full competition details at Application Performance: The Best of the Web. Enter the judges… As mentioned in my last blogpost, we have a mystery panel of celebrity judges lined up to select the prize-winning performance pointers. We’re now ready to reveal their secret identities! Judging your ASP.NET  tips will be: Jean-Phillippe Gouigoux, MCTS/MCPD Enterprise Architect and MVP Connected System Developer. He’s a board member at French software company MGDIS, and teaches algorithms, security, software tests, and ALM at the Université de Bretagne Sud. Jean-Philippe also lectures at IT conferences and writes articles for programming magazines. His book Practical Performance Profiling is published by Simple-Talk. Nik Molnar,  a New Yorker, ASP Insider, and co-founder of Glimpse, an open source ASP.NET diagnostics and debugging tool. Originally from Florida, Nik specializes in web development, building scalable, client-centric solutions. In his spare time, Nik can be found cooking up a storm in the kitchen, hanging with his wife, speaking at conferences, and working on other open source projects. Mitchel Sellers, Microsoft C# and DotNetNuke MVP. Mitchel is an experienced software architect, business leader, public speaker, and educator. He works with companies across the globe, as CEO of IowaComputerGurus Inc. Mitchel writes technical articles for online and print publications and is the author of Professional DotNetNuke Module Programming. He frequently answers questions on StackOverflow and MSDN and is an active participant in the .NET and DotNetNuke communities. Clive Tong, Software Engineer at Red Gate. In previous roles, Clive spent a lot of time working with Common LISP and enthusing about functional languages, and he’s worked with managed languages since before his first real job (which was a long time ago). Long convinced of the productivity benefits of managed languages, Clive is very interested in getting good runtime performance to keep managed languages practical for real-world development. And our trio of SQL Server specialists, ready to select your top suggestion, are (drumroll): Rodney Landrum, a SQL Server MVP who writes regularly about Integration Services, Analysis Services, and Reporting Services. He’s authored SQL Server Tacklebox, three Reporting Services books, and contributes regularly to SQLServerCentral, SQL Server Magazine, and Simple–Talk. His day job involves overseeing a large SQL Server infrastructure in Orlando. Grant Fritchey, Product Evangelist at Red Gate and SQL Server MVP. In an IT career spanning more than 20 years, Grant has written VB, VB.NET, C#, and Java. He’s been working with SQL Server since version 6.0. 
Grant volunteers with the Editorial Committee at PASS and has written books for Apress and Simple-Talk. Jonathan Allen, leader and founder of the PASS SQL South West user group. He’s been working with SQL Server since 1999 and enjoys performance tuning, development, and using SQL Server for business solutions. He’s spoken at SQLBits and SQL in the City, as well as local user groups across the UK. He’s also a moderator at ask.sqlservercentral.com.

    Read the article

  • Speaking at Triangle SQL Server User Group 16 Mar 2010!

    - by andyleonard
    I'm excited to present Applied SSIS Design Patterns to the Triangle SQL Server User Group 16 Mar 2010! This is a reprise of my PASS Summit 2009 spotlight session. If you read this blog and make the meeting, introduce yourself! :{> Andy ...(read more)

    Read the article

  • Presenting to the New England SQL Server Users Group 10 Jun 2010!

    - by andyleonard
    I am honored to present Applied SSIS Design Patterns to the New England SQL Server Users Group on 10 Jun 2010! This is a reprise of the spotlight session presented at the PASS Summit 2009. Abstract "Design Patterns" is more than a trendy buzz phrase; design patterns are a way of breaking down complex development projects into manageable tasks. They lend themselves to several development methodologies and apply to SSIS development. Chances are you're using your own design patterns now! In this spotlight...(read more)

    Read the article

  • How can I create a generic ValidationAttribute in C#?

    - by sabbour
    I'm trying to create a UniqueAttribute using System.ComponentModel.DataAnnotations.ValidationAttribute. I want this to be generic, as in I could pass the LINQ DataContext, the table name, and the field, and validate whether the incoming value is unique or not. Here is a non-compilable code fragment that I'm stuck at right now:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.ComponentModel.DataAnnotations;
        using System.Data.Linq;
        using System.ComponentModel;

        namespace LinkDev.Innovation.Miscellaneous.Validation.Attributes
        {
            public class UniqueAttribute : ValidationAttribute
            {
                public string Field { get; set; }

                public override bool IsValid(object value)
                {
                    string str = (string)value;
                    if (String.IsNullOrEmpty(str))
                        return true;

                    // this is where I'm stuck
                    return (!Table.Where(entity => entity.Field.Equals(str)).Any());
                }
            }
        }

    I should be using this in my model as follows:

        [Required]
        [StringLength(10)]
        [Unique(new DataContext(),"Groups","name")]
        public string name { get; set; }

    Read the article

  • Linq, Left Join and Dates...

    - by BitFiddler
    So my situation is that I have a LINQ to SQL model that does not allow dates to be null in one of my tables. This is intended, because the database does not allow nulls in that field. My problem is that when I try to write a LINQ query with this model, I cannot do a left join with that table anymore, because the date is not a nullable field and so I can't compare it to Nothing. Example: there is a Movie table, {ID, MovieTitle}, and a Showings table, {ID, MovieID, ShowingTime, Location}. Now I am trying to write a statement that will return all those movies that have no showings. In T-SQL this would look like:

        Select m.*
        From Movies m
        Left Join Showings s On m.ID = s.MovieID
        Where s.ShowingTime is Null

    Now in this situation I could test for Null on the 'Location' field, but this is not what I have in reality (just a simplified example). All I have are non-null dates. I am trying to write in LINQ:

        From m In dbContext.Movies _
        Group Join s In Showings on m.ID Equals s.MovieID into MovieShowings = Group _
        From ms In MovieShowings.DefaultIfEmpty _
        Where ms.ShowingTime is Nothing _
        Select ms

    However I am getting an error saying 'Is' operator does not accept operands of type 'Date'. Operands must be reference or nullable types. Is there any way around this? The model is correct; there should never be a null in Showings.ShowingTime. But if you do a left join, and there are no show times for a particular movie, then ShowingTime SHOULD be Nothing for that movie... Thanks everyone for your help.

    Read the article

  • Get column names from a table where one of the column names is a keyword.

    - by syedsaleemss
    I'm using a C# .NET Windows Forms application. I have created a database which has many tables, and in one of the tables I have entered data. This table has 4 columns named key, name, age, and value. Here the name "key" of the first column is a keyword. Now I am trying to get these column names into a combo box. I am unable to get the name "key". It works for "key" when I use this code:

        private void comboseccolumn_SelectedIndexChanged(object sender, EventArgs e)
        {
            string dbname = combodatabase.SelectedItem.ToString();
            string path = @"Data Source=" + textBox1.Text + ";Initial Catalog=" + dbname + ";Integrated Security=SSPI";
            //string path=@"Data Source=SYED-PC\SQLEXPRESS;Initial Catalog=resources;Integrated Security=SSPI";
            SqlConnection con = new SqlConnection(path);
            string tablename = comboBox2.SelectedItem.ToString();
            //string query= "Select * from" +tablename+;
            //SqlDataAdapter adp = new SqlDataAdapter(" Select [Key] ,value from " + tablename, con);
            SqlDataAdapter adp = new SqlDataAdapter(" Select [" + combofirstcolumn.SelectedItem.ToString() + "]," + comboseccolumn.SelectedItem.ToString() + "\t from " + tablename, con);
            DataTable dt = new DataTable();
            adp.Fill(dt);
            dataGridView1.DataSource = dt;
        }

    This is because I am using "[" in the select query, but it won't work for non-keywords; and if I remove the "[" it does not work for the keyword. Please suggest how I can get both keyword and non-keyword column names.
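
    For the SQL side of this, square brackets are legal around any identifier in SQL Server, reserved word or not, so one simple approach may be to bracket every column name spliced into the statement rather than only the known keywords. A rough sketch (the table name below is hypothetical; the column names are the poster's):

        -- Bracket-quoting works for both reserved words and ordinary identifiers
        SELECT [key], [name], [age], [value]
        FROM dbo.SomeTable;

    On the application side, wrapping each selected name in brackets (or running it through T-SQL's QUOTENAME function) before building the SELECT string should remove the need to special-case "key".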

    Read the article

  • Practical size limitations for RDBMS

    - by grenade
    I am working on a project that must store very large datasets and associated reference data. I have never come across a project that required tables quite this large. I have proved that at least one development environment cannot cope at the database tier with the processing required by the complex queries against views that the application layer generates (views with multiple inner and outer joins, grouping, summing and averaging against tables with 90 million rows). The RDBMS that I have tested against is DB2 on AIX. The dev environment that failed was loaded with 1/20th of the volume that will be processed in production. I am assured that the production hardware is superior to the dev and staging hardware but I just don't believe that it will cope with the sheer volume of data and complexity of queries. Before the dev environment failed, it was taking in excess of 5 minutes to return a small dataset (several hundred rows) that was produced by a complex query (many joins, lots of grouping, summing and averaging) against the large tables. My gut feeling is that the db architecture must change so that the aggregations currently provided by the views are performed as part of an off-peak batch process. Now for my question. I am assured by people who claim to have experience of this sort of thing (which I do not) that my fears are unfounded. Are they? Can a modern RDBMS (SQL Server 2008, Oracle, DB2) cope with the volume and complexity I have described (given an appropriate amount of hardware) or are we in the realm of technologies like Google's BigTable? I'm hoping for answers from folks who have actually had to work with this sort of volume at a non-theoretical level.

    Read the article

  • Sqlite3 "chained" query

    - by Arrieta
    I need to create a configuration file from a data file that looks as follows:

        MAN1_TIME '01-JAN-2010 00:00:00.0000 UTC'
        MAN1_RX 123.45
        MAN1_RY 123.45
        MAN1_RZ 123.45
        MAN1_NEXT 'MAN2'
        MAN2_TIME '01-MAR-2010 00:00:00.0000 UTC'
        MAN2_RX 123.45
        [...]
        MAN2_NEXT 'MANX'
        [...]
        MANX_TIME [...]

    This file describes different "legs" of a trajectory. In this case, MAN1 is chained to MAN2, and MAN2 to MANX. In the original file, the chains are not as obvious (i.e., they are non-sequential). I've managed to read the file and store it in an SQLite3 database (I'm using the Python interface). The table is stored with three columns: Id, Par, and Val; for instance, Id='MAN1', Par='RX', and Val='123.45'. I'm interested in querying this database to obtain the information related to 'n' legs. In English, that would be: "Select RX, RY, RZ for the next five legs starting on MAN1". So the query would go to MAN1, retrieve RX, RY, RZ, then read the parameter NEXT and go to that Id, retrieve RX, RY, RZ; read the parameter NEXT; go to that one ... like this five times. How can I express such a query with "dynamic parameters"? Thank you.
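
    One way to express "follow NEXT n times" in a single statement, if the SQLite build in use is new enough to support recursive common table expressions, is sketched below. The table name legs, the start id 'MAN1', and the depth limit 5 are assumptions, and the NEXT value is assumed to be stored as the bare id of the following leg:

        WITH RECURSIVE chain(Id, depth) AS (
            SELECT 'MAN1', 1
            UNION ALL
            SELECT l.Val, c.depth + 1
            FROM chain c
            JOIN legs l ON l.Id = c.Id AND l.Par = 'NEXT'
            WHERE c.depth < 5
        )
        SELECT c.depth, c.Id,
               MAX(CASE WHEN l.Par = 'RX' THEN l.Val END) AS RX,
               MAX(CASE WHEN l.Par = 'RY' THEN l.Val END) AS RY,
               MAX(CASE WHEN l.Par = 'RZ' THEN l.Val END) AS RZ
        FROM chain c
        JOIN legs l ON l.Id = c.Id
        GROUP BY c.depth, c.Id
        ORDER BY c.depth;

    The recursive part walks the NEXT pointers, and the outer query pivots each leg's parameters back into columns; the start id and leg count can be bound as ordinary query parameters from Python.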

    Read the article

  • Linked Measure Groups and Local Dimensions

    - by ekoner
    Mulling over something I've been reading up on. According to Chris Webb, A linked measure group can only be used with dimensions from the same database as the source measure group. So I took this to mean as long as two cubes share a database, a linked measure group can be used with a dimension. So I created a new cube and added a local measure group, a local dimension and a linked measure group. However, I can't create a relationship between the linked measure group and the local dimension even though they are within the same database. I get the message below: Regular relationships in the current database between non-linked (local) dimensions and linked measure groups cannot be edited. These relationship can only be created through the wizard. This dialog can be used to delete these relationships. I see that I can go to the original cube and add the dimension there, but does the message below mean I have an alternative? I just know it's going to be something simple and trivial! Thanks for reading.

    Read the article

  • Search book by title, and author

    - by Swoosh
    I have a table with columns: author firstname, author lastname, and booktitle. Multiple users are inserting into the database, through an import, and I'd like to avoid duplicates. So I'm trying to do something like this: I have a record in the db: First Name: "Isaac", Last Name: "Assimov", Title: "I, Robot". If the user tries to add it again, it would basically be non-split text (it would not be split up into author firstname, author lastname, and booktitle). So it would basically look like this: "Isaac Asimov - I Robot" or "Asimov, Isaac - I Robot" or "I Robot by Isaac Asimov". You see what I am getting at? (I cannot force the user to split up all the books into author firstname, author lastname, and booktitle, and I don't even like the idea of forcing the user, because it's not too user-friendly.) What is the best way (in SQL) to compare all these possible book-data scenarios to what I have in the database, so as not to add the same book twice? I was thinking about the possibility of suggesting to the user: "is THIS the book you are trying to add?" (imagine a list instead of the THIS word, just like on stackoverflow - ask question - Related Questions). I was thinking about soundex and maybe even the like operators, but so far I didn't get the results I was hoping for.
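
    A rough sketch of the "is THIS the book you meant?" suggestion query, assuming SQL Server (where the mentioned SOUNDEX/DIFFERENCE functions exist) and hypothetical table and column names; the raw, unsplit user text is matched loosely against what is already stored rather than parsed:

        DECLARE @raw NVARCHAR(200) = 'Asimov, Isaac - I Robot';

        SELECT TOP (5) b.author_firstname, b.author_lastname, b.booktitle
        FROM dbo.books b
        WHERE @raw LIKE '%' + b.author_lastname + '%'      -- stored last name appears somewhere in the input
           OR DIFFERENCE(@raw, b.booktitle) >= 3           -- rough phonetic similarity against the title
        ORDER BY DIFFERENCE(@raw, b.booktitle) DESC;

    This will not catch every variant (soundex in particular is crude on long strings), but it is enough to drive a short candidate list that the user confirms before the insert happens.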

    Read the article

  • Archiving strategies and limitations of data in a table

    - by Samuel
    Environment: JBoss, MySQL, JPA, Hibernate. Our web application will be catering to a large number of users (~1,000,000), and there are lots of child tables where user-specific data is stored (e.g. personal, health, forum contributions ...). What would be the best practice to archive users and user-specific information? [a] Would it be wise to move the archived users and user-specific information to their respective archive tables within the same database (e.g. user_archive, user_forum_comments_archive ...), OR [b] would you just mark the database entries with a flag in the original table(s) and query only non-archived entries? We have a unique constraint on User.loginid; how do you handle this requirement if the users are archived via option [a] (i.e. if a user with loginid 'samuel' gets moved into the archive table and a new user gets added with the same name in the original table, how would you prevent this)? What would be the best strategy to address the unique key constraints? We also have a requirement to selectively archive records and bring them back if necessary; would you rely on database tools, or would you handle this via the persistence APIs exposed by the JPA entity model?
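
    For the unique-key question under option [b], one pattern (assuming MySQL, where a UNIQUE index permits any number of NULLs) is to make the "active" marker part of the key, so archived rows can never collide with a live login. A cut-down sketch with hypothetical names:

        CREATE TABLE users (
            id        BIGINT AUTO_INCREMENT PRIMARY KEY,
            loginid   VARCHAR(64) NOT NULL,
            is_active TINYINT NULL,   -- 1 = live row, NULL = archived row
            UNIQUE KEY uq_users_login_active (loginid, is_active)
        );

        -- Archiving a user: the NULL marker frees the loginid for reuse by a new live row
        UPDATE users SET is_active = NULL WHERE loginid = 'samuel';

    Only one (loginid, 1) row can exist at a time, while any number of archived (loginid, NULL) rows are allowed, which keeps the constraint meaningful without a separate archive table.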

    Read the article

  • Help on understanding multiple columns on an index?

    - by Xaisoft
    Assume I have a table called "table" and I have 3 columns, a, b, and c. What does it mean to have a non-clustered index on columns a,b? Is a nonclustered index on columns a,b the same as a nonclustered index on columns b,a? (Note the order). Also, Is a nonclustered index on column a the same as a nonclustered index on a,c? I was looking at the website sqlserver performance and they had these dmv scripts where it would tell you if you had overlapping indexes and I believe it was saying that having an index on a is the same as a,b, so it is redundant. Is this true about indexes? One last question is why is the clustered index put on the primary key. Most of the time the primary key is not queried against, so shouldn't the clustered index be on the most queried column. I am probably missing something here like having it on the primary key speeds up joins? Great explanations. Should I turn this into a wiki and change the title index explanation?
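
    A short sketch of why column order matters and what "redundant" means here; the table and index names below are made up:

        -- (a, b): supports seeks on a alone, or on a and b together;
        -- an extra index on just (a) is therefore usually redundant.
        CREATE NONCLUSTERED INDEX IX_t_a_b ON dbo.t (a, b);

        -- (b, a): a different index entirely; it supports seeks that lead with b,
        -- which (a, b) cannot, so the two are not interchangeable.
        CREATE NONCLUSTERED INDEX IX_t_b_a ON dbo.t (b, a);

        -- (a, c) is not the same as (a): it still seeks on a,
        -- but additionally covers and filters on c.
        CREATE NONCLUSTERED INDEX IX_t_a_c ON dbo.t (a, c);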

    Read the article

  • SQL2008 merge replication fails to update dependent items when table is added

    - by Dan Puzey
    Setup: an existing SQL2008 merge replication scenario. A large server database, including views and stored procs, being replicated to client machines. What I'm doing: * adding a new table to the database * mark the new table for replication (using SP_AddMergeArticle) * alter a view (which is already part of the replicated content) is updated to include fields from this new table (which is joined to the tables in the existing view). A stored procedure is similarly updated. The problem: the table gets replicated to client machines, but the view is not updated. The stored procedure is also not updated. Non-useful workaround: if I run the snapshot agent after calling SP_AddMergeArticle and before updating the view/SP, both the view and the stored procedure changes correctly replicate to the client. The bigger problem: I'm running a list of database scripts in a transaction, as part of a larger process. The snapshot agent can't be run during a transaction, and if I interrupt the transaction (e.g. by running the scripts in multiple transactions), I lose the ability to roll back the changes should something fail. Does anyone have any suggestions? It seems like I must be missing something obvious, because I don't see why the changes to the view/sproc wouldn't be replicating anyway, regardless of what's going on with the new table.

    Read the article

  • Varchar columns: Nullable or not.

    - by NYSystemsAnalyst
    The database development standards in our organization state that varchar fields should not allow null values. They should have a default value of an empty string (""). I know this makes querying and concatenation easier, but today one of my coworkers questioned me about why that standard only existed for varchar types and not other datatypes (int, datetime, etc). I would like to know if others consider this to be a valid, defensible standard, or if varchar should be treated the same as fields of other data types. I believe this standard is valid for the following reason: I believe that an empty string and null values, though technically different, are conceptually the same. An empty, zero-length string is a string that does not exist. It has no value. However, a numeric value of 0 is not the same as NULL. For example, if a field called OutstandingBalance has a value of 0, it means there are $0.00 remaining. However, if the same field is NULL, that means the value is unknown. On the other hand, a field called CustomerName with a value of "" is basically the same as a value of NULL because both represent the non-existence of the name. I read somewhere that an analogy for an empty string vs. NULL is that of a blank CD vs. no CD. However, I believe this to be a false analogy because a blank CD still physically exists and still has physical data space that does not have any meaningful data written to it. Basically, I believe a blank CD is the equivalent of a string of blank spaces (" "), not an empty string. Therefore, I believe a string of blank spaces to be an actual value separate from NULL, but an empty string to be the absence of value, conceptually equivalent to NULL. Please let me know if my beliefs regarding variable-length strings are valid, or please enlighten me if they are not. I have read several blogs / arguments regarding this subject, but still do not see a true conceptual difference between NULLs and empty strings.

    Read the article

  • Same query has nested loops when used with INSERT, but Hash Match without.

    - by AaronLS
    I have two tables; one has about 1500 records and the other has about 300000 child records, roughly a 1:200 ratio. I stage the parent table to a staging table, SomeParentTable_Staging, and then I stage all of its child records, but I only want the ones that are related to the records I staged in the parent table. So I use the query below to perform this staging by joining with the parent table's staged data.

        --Stage child records
        INSERT INTO [dbo].[SomeChildTable_Staging]
            ([SomeChildTableId]
            ,[SomeParentTableId]
            ,SomeData1
            ,SomeData2
            ,SomeData3
            ,SomeData4
            )
        SELECT [SomeChildTableId]
            ,D.[SomeParentTableId]
            ,SomeData1
            ,SomeData2
            ,SomeData3
            ,SomeData4
        FROM [dbo].[SomeChildTable] D
        INNER JOIN dbo.SomeParentTable_Staging I
            ON D.SomeParentTableID = I.SomeParentTableID;

    The execution plan indicates that the tables are being joined with a Nested Loop. When I run just the select portion of the query without the insert, the join is performed with a Hash Match. So the select statement is the same, but in the context of an insert it uses the slower nested loop. I have added a non-clustered index on D.SomeParentTableID so that there is an index on both sides of the join; I.SomeParentTableID is a primary key with a clustered index. Why does it use a nested loop for inserts that use a join? Is there a way to improve the performance of the join for the insert?
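
    One thing worth trying, sketched against the query above: state the join algorithm explicitly so the optimizer cannot fall back to the nested loop, either inline or via an OPTION clause. A join hint also fixes the join order, so whether it actually helps should be verified against the resulting plan and timings.

        INSERT INTO dbo.SomeChildTable_Staging
            (SomeChildTableId, SomeParentTableId, SomeData1, SomeData2, SomeData3, SomeData4)
        SELECT D.SomeChildTableId, D.SomeParentTableId, D.SomeData1, D.SomeData2, D.SomeData3, D.SomeData4
        FROM dbo.SomeChildTable D
        INNER HASH JOIN dbo.SomeParentTable_Staging I
            ON D.SomeParentTableID = I.SomeParentTableID;
        -- or leave the join untouched and append: OPTION (HASH JOIN)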

    Read the article

  • How can I perform this query between related tables without using UNION?

    - by jeremy
    Suppose I have two separate tables that I want to query. Both of these tables have a relation with a third table. How can I query both tables with a single, non-UNION-based query? I want the search to rank the results by comparing a field on each table. Here's a theoretical example: I have a User table, and that User can have both CDs and books. I want to find all of that user's books and CDs with a single query matching a string ("awesome" in this example). A UNION-based query might look like this:

        SELECT "book" AS model, name, ranking FROM book WHERE name LIKE 'Awesome%'
        UNION
        SELECT "cd" AS model, name, ranking FROM cd WHERE name LIKE 'Awesome%'
        ORDER BY ranking DESC

    How can I perform a query like this without the UNION? If I do a simple left join from User to Books and CDs, we end up with a total number of results equal to the number of matching CDs times the number of matching books. Is there a GROUP BY or some other way of writing the query to fix this?

    Read the article

  • Are there more secure alternatives to the .Net SQLConnection class?

    - by KeyboardMonkey
    Hi SO people, I'm very surprised this issue hasn't been discussed in-depth: This article tells us how to use windbg to dump a running .Net process strings in memory. I spent much time researching the SecureString class, which uses unmanaged pinned memory blocks, and keeps the data encrypted too. Great stuff. The problem comes in when you use it's value, and assign it to the SQLConnection.ConnectionString property, which is of the System.String type. What does this mean? Well... It's stored in plain text Garbage Collection moves it around, leaving copies in memory It can be read with windbg memory dumps That totally negates the SecureString functionality! On top of that, the SQLConnection class is non-inheritable, I can't even roll my own with a SecureString property instead; Yay for closed-source. Yay. A new DAL layer is in progress, but for a new major version and for so many users it will be at least 2 years before every user is upgraded, others might stay on the old version indefinitely, for whatever reason. Because of the frequency the connection is used, marshalling from a SecureString won't help, since the immutable old copies stick in memory until GC comes around. Integrated Windows security isn't an option, since some clients don't work on domains, and other roam and connect over the net. How can I secure the connection string, in memory, so it can't be viewed with windbg?

    Read the article

  • Merge computed data from two tables back into one of them

    - by Tyler McHenry
    I have the following situation (as a reduced example). Two tables, Measures1 and Measures2, each of which stores an ID, a Weight in grams, and optionally a Volume in fluid ounces. (In reality, Measures1 has a good deal of other data that is irrelevant here.) Contents of Measures1:

        +----+----------+--------+
        | ID | Weight   | Volume |
        +----+----------+--------+
        |  1 | 100.0000 | NULL   |
        |  2 | 200.0000 | NULL   |
        |  3 | 150.0000 | NULL   |
        |  4 | 325.0000 | NULL   |
        +----+----------+--------+

    Contents of Measures2:

        +----+----------+----------+
        | ID | Weight   | Volume   |
        +----+----------+----------+
        |  1 |  75.0000 |  10.0000 |
        |  2 | 400.0000 |  64.0000 |
        |  3 | 100.0000 |  22.0000 |
        |  4 | 500.0000 | 100.0000 |
        +----+----------+----------+

    These tables describe equivalent weights and volumes of a substance. E.g. 10 fluid ounces of substance 1 weighs 75 grams. The IDs are related: ID 1 in Measures1 is the same substance as ID 1 in Measures2. What I want to do is fill in the NULL volumes in Measures1 using the information in Measures2, but keeping the weights from Measures1 (then, ultimately, I can drop the Measures2 table, as it will be redundant). For the sake of simplicity, assume that all volumes in Measures1 are NULL and all volumes in Measures2 are not. I can compute the volumes I want to fill in with the following query:

        SELECT Measures1.ID, Measures1.Weight,
               (Measures2.Volume * (Measures1.Weight / Measures2.Weight)) AS DesiredVolume
        FROM Measures1 JOIN Measures2 ON Measures1.ID = Measures2.ID;

    Producing:

        +----+----------+-----------------+
        | ID | Weight   | DesiredVolume   |
        +----+----------+-----------------+
        |  4 | 325.0000 | 65.000000000000 |
        |  3 | 150.0000 | 33.000000000000 |
        |  2 | 200.0000 | 32.000000000000 |
        |  1 | 100.0000 | 13.333333333333 |
        +----+----------+-----------------+

    But I am at a loss for how to actually insert these computed values into the Measures1 table. Preferably, I would like to be able to do it with a single query, rather than writing a script or stored procedure that iterates through every ID in Measures1. But even then I am worried that this might not be possible, because the MySQL documentation says that you can't use a table in an UPDATE query and a SELECT subquery at the same time, and I think any solution would need to do that. I know that one workaround might be to create a new table with the results of the above query (also selecting all of the other non-Volume fields in Measures1) and then drop both tables and replace Measures1 with the newly-created table, but I was wondering if there is a better way to do it that I am missing.
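
    The restriction on selecting from the table being updated applies to subqueries, not to MySQL's multi-table UPDATE, so the computed volumes can be written back with a single join-update. A sketch using the poster's tables:

        -- Fill Measures1.Volume from the equivalent row in Measures2, scaled to Measures1's weight
        UPDATE Measures1 m1
        JOIN Measures2 m2 ON m1.ID = m2.ID
        SET m1.Volume = m2.Volume * (m1.Weight / m2.Weight)
        WHERE m1.Volume IS NULL;

    Once the update has been verified, Measures2 can be dropped as planned.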

    Read the article
