Search Results

Search found 28052 results on 1123 pages for 't sql tuesday'.

Page 613/1123 | < Previous Page | 609 610 611 612 613 614 615 616 617 618 619 620  | Next Page >

  • Linq like or other construction

    - by Yauhen Kavalenka
    I have an Oracle DB in my solution and want to get results from this query:

        select * from doctor where doctor.name like '%IVANOV_A%';

    But when I do the equivalent in LINQ, I get no results:

        from p in repository.Doctor.Where(x => x.Name.ToLower().Contains(name)) select p;

    where 'name' is a string parameter. The web layer sends strings like "Ivanov a" or "A Ivanov", so the pattern depends on how the user happens to type it. How can I find a patient's doctor by name when the name consists of a first name and a last name, but the user doesn't know the doctor's full name?
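
    One possible approach (a sketch in Oracle SQL; :token1 and :token2 are hypothetical bind variables that the application fills by splitting the user's input on whitespace) is to match each name part independently, so "Ivanov A" and "A Ivanov" find the same row:

        -- Each token must appear somewhere in the name, in any order.
        SELECT *
          FROM doctor
         WHERE UPPER(doctor.name) LIKE '%' || UPPER(:token1) || '%'
           AND UPPER(doctor.name) LIKE '%' || UPPER(:token2) || '%';

    The same idea carries over to LINQ by chaining one Where(x => x.Name.ToLower().Contains(token)) call per token.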

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9MB on disk, 10MB of indexes according to pgAdmin. The problem is that inserting them, by whatever method, literally takes ages: up to 3 minutes of 100% disk-busy time. That's not something you want on a production site. It doesn't matter whether the inserts are in a transaction or issued via plain INSERT, multi-row INSERT, COPY FROM, or even INSERT INTO t1 SELECT * FROM t2.

    After noticing this wasn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20MB on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys. Oh, and the disk activity was pure writing; everything is cached in RAM, so only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referenced tables, as 3MB/sec * 180s is way more data than the 20MB this new table takes on disk. There was no WAL for the 180s case (I was testing in psql directly; in Django, add ~50% overhead for WAL logging). I tried @commit_on_success, same slowness; I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10MB worth of inserts generate 10x 16MB log segments?

    Table layout: id serial primary key, a bunch of int32 columns, and 3 foreign keys to:

    - a small table: 198 rows, 16KB on disk
    - a large table: 1.2M rows, 59MB data + 89MB index on disk
    - a large table: 2.2M rows, 198MB + 210MB

    So, am I doomed to either drop the foreign keys manually, or use the table in a very un-Django way by saving bla_id x3 myself and skipping models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
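
    A common workaround for bulk reloads (a sketch only; the table and constraint names are hypothetical, and re-adding a constraint re-validates the whole table, but one set-based check is usually far cheaper than 100k row-by-row lookups):

        BEGIN;
        ALTER TABLE hourly_table DROP CONSTRAINT hourly_table_small_id_fkey;
        -- ... repeat for the other two foreign keys

        TRUNCATE hourly_table;
        INSERT INTO hourly_table SELECT * FROM staging_table;  -- or COPY FROM

        ALTER TABLE hourly_table
            ADD CONSTRAINT hourly_table_small_id_fkey
            FOREIGN KEY (small_id) REFERENCES small_table (id);
        -- ... repeat for the other two foreign keys
        COMMIT;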

    Read the article

  • T-SQL: Build Nested Set From Parent-Child Relationship

    - by Peder Rice
    I have a table that stores my Customer hierarchy with a nested set (due to the specific design of the application, I wasn't able to leverage just a Customer/Parent Customer mapping table). To simplify maintenance of this table, I've built a couple of stored procedures to handle moving nodes around and creating new nodes, but it's significantly more work than maintaining a Customer/Parent Customer table. Further, these structures are very fragile. So I'm looking for a way to have a Customer/Parent Customer table and then convert that table to a nested set on demand. Does anyone have a link to such an implementation?
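
    One well-known implementation is the stack-based conversion described in Joe Celko's writing on trees in SQL; below is a T-SQL sketch of that idea, with hypothetical names: a source table CustomerParent(CustomerID, ParentID), ParentID NULL for the single root. The scratch copy is consumed as the walk proceeds.

        -- Sketch adapted from Celko's adjacency-list-to-nested-set routine.
        SELECT CustomerID, ParentID INTO #Tree FROM CustomerParent;

        CREATE TABLE #Stack (
            stack_top  int NOT NULL,
            CustomerID int NOT NULL,
            lft        int NOT NULL,
            rgt        int NULL
        );

        DECLARE @counter int, @max int, @top int;
        SELECT @max = 2 * COUNT(*) FROM #Tree;

        INSERT INTO #Stack (stack_top, CustomerID, lft, rgt)
        SELECT 1, CustomerID, 1, @max FROM #Tree WHERE ParentID IS NULL;

        SELECT @counter = 2, @top = 1;

        WHILE @counter < @max
        BEGIN
            IF EXISTS (SELECT * FROM #Stack AS s JOIN #Tree AS t
                           ON t.ParentID = s.CustomerID
                       WHERE s.stack_top = @top)
            BEGIN
                -- push: visit one unvisited child; its lft is the current counter
                INSERT INTO #Stack (stack_top, CustomerID, lft, rgt)
                SELECT @top + 1, MIN(t.CustomerID), @counter, NULL
                FROM #Stack AS s JOIN #Tree AS t ON t.ParentID = s.CustomerID
                WHERE s.stack_top = @top;

                DELETE FROM #Tree
                WHERE CustomerID = (SELECT CustomerID FROM #Stack WHERE stack_top = @top + 1);

                SET @top = @top + 1;
            END
            ELSE
            BEGIN
                -- pop: no children left, so the counter becomes this node's rgt
                UPDATE #Stack
                SET rgt = @counter, stack_top = -stack_top
                WHERE stack_top = @top;

                SET @top = @top - 1;
            END;

            SET @counter = @counter + 1;
        END;

        -- #Stack now holds (CustomerID, lft, rgt) for the nested set.

    Each iteration either descends to an unvisited child (assigning its lft) or backs out of a node with no children left (assigning its rgt), so the counter hands out the 2N boundary values exactly once.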

    Read the article

  • Is closing/disposing an SqlDataReader needed if you are already closing the sqlconnection?

    - by Brian
    I noticed this question, but my question is a bit more specific. Is there any advantage to using

        using (SqlConnection conn = new SqlConnection(conStr))
        {
            using (SqlCommand command = new SqlCommand())
            {
                // do stuff
            }
        }

    instead of

        using (SqlConnection conn = new SqlConnection(conStr))
        {
            SqlCommand command = new SqlCommand();
            // do stuff
        }

    Obviously it does matter if you run more than one command with the same connection, since closing an SqlDataReader is more efficient than closing and reopening a connection (calling conn.Close(); conn.Open(); will also free up the connection). I see many people insist that failing to close the DataReader means leaving open connection resources around, but doesn't that only apply if you don't close the connection?

    Read the article

  • How to report DataContext.SubmitChanges() progress with LINQ2SQL

    - by kzen
    If there is a foreach loop that contains DataContext.Customer.InsertOnSubmit(cust), for example:

        foreach (Object obj in collection)
        {
            Customer cust = new Customer { Id = obj.Id, Name = obj.Name ... };
            DataContext.Customer.InsertOnSubmit(cust);
        }

    and outside of the loop there is a call to:

        DataContext.SubmitChanges();

    is there a way to observe the progress of SubmitChanges in order to report it back to the user (or a different approach that doesn't move SubmitChanges into the loop)?

    Read the article

  • MSDN about stored procedure default return value

    - by Ilya
    Hello, could anyone point me to exactly where MSDN says that every user stored procedure returns 0 by default if no error happens? In other words, can I be sure that the example code given below, as the body of a stored procedure:

        IF someStatement
        BEGIN
            RETURN 1
        END

    will always return zero if someStatement is false and no error occurs? I know that it actually works this way, but I have failed to find any explicit statement about this from Microsoft.
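
    Whatever the documentation says, one way to sidestep the question entirely (a sketch with a hypothetical procedure name and a placeholder condition) is to make the zero explicit, so callers never depend on the implicit default:

        CREATE PROCEDURE dbo.usp_Check
        AS
        BEGIN
            IF someStatement  -- placeholder for the real condition
                RETURN 1;

            RETURN 0;  -- explicit success code instead of the implicit default
        END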

    Read the article

  • SYSDATE - 1 error on pl/sql function

    - by ayo
    Hi curtisk/all, I have an issue: when I run the statements below, the START_LOGMNR call gives me the following error:

        select 'EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME =>'''||name||'''||,OPTIONS=>DBMS_LOGMNR.NEW);'
          from v$archived_log where name is not null;

        select 'EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME =>'''||name||'''||,OPTIONS=>DBMS_LOGMNR.ADDFILE);'
          from v$archived_log where name is not null;

        EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTTIME => SYSDATE - 1, ENDTIME => SYSDATE, OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);

    Error:

        ERROR at line 1:
        ORA-01291: missing logfile
        ORA-06512: at "SYS.DBMS_LOGMNR", line 58
        ORA-06512: at line 1

    But I have added all the archived logs from several days back, and my SYSDATE is today. Kindly help out with this issue. Thanks. Regards, Ayo
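
    Two things worth checking (a sketch, assuming the standard DBMS_LOGMNR API; the loop below is illustrative, not from the original post): if the queries are quoted verbatim, the generated ADD_LOGFILE lines contain a stray || before OPTIONS and would not run as written, and ORA-01291 typically means the mined time range needs a log that was never actually registered. Registering the files directly from PL/SQL avoids the string building entirely:

        BEGIN
            FOR r IN (SELECT name
                        FROM v$archived_log
                       WHERE name IS NOT NULL
                       ORDER BY first_time)
            LOOP
                DBMS_LOGMNR.ADD_LOGFILE(
                    logfilename => r.name,
                    options     => DBMS_LOGMNR.ADDFILE);  -- appends each file
            END LOOP;
        END;
        /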

    Read the article

  • Validation L2S question

    - by user158020
    This may be a bit long-winded because I am new to WPF. I have created a partial class for an entity in my LINQ to SQL classes that is primarily used for validation; it implements the OnChanging and OnValidate methods. I am trying to use the MVVM pattern, and in a window/view I have set the DataContext in the XAML:

        <Window.DataContext>
            <vm:StartViewModel />
        </Window.DataContext>

    When a user leaves a required field in the view blank, the OnChanging event of the partial class fires when I close the form, not when I save the data. So if a user leaves the textbox blank, the old value is retained and the OnChanging method fires, but I have no idea how to alert the user to the resulting error. Here is my OnChanging code in the partial class:

        partial void Ondocument_titleChanging(string value)
        {
            if (value.Length == 0)
                throw new Exception("Document title is required.");
            if (value.Length > 256)
                throw new Exception("Document title cannot be longer than 256 characters.");
        }

    Throwing an exception doesn't notify the user of the error; it just allows the form to close and rejects the changes to the textbox. Hope this makes sense...

    Edit: this example was taken from Scott Guthrie's article here: http://aspalliance.com/1427_LINQ_to_SQL_Part_5__Binding_UI_using_the_ASPLinqDataSource_Control.5

    Read the article

  • Databinding multiple tables linq query to gridview?

    - by Curtis White
    My question is: how can I display a LINQ query in a GridView that pulls data from multiple tables, and still allow the user to edit some of the fields or delete the data from a single table? I'd like to do this with either a LinqDataSource or a LINQ query. I'm aware I can set e.Result to the query in the Selecting event. I've also been able to build a custom databound control for displaying the LINQ relations (parent.child). However, I'm not sure how I can make this work with delete; I'm thinking I may need to handle the delete event with custom code.

    Read the article

  • SMO some times doesn't display the instances in sql2008 cluster

    - by Cute
    Hi, I have used the SMO API, specifically SmoApplication.EnumAvailableServers(false), and filtered the local instances from its results. I used this approach, rather than passing true, so the same code is also convenient for remote SQL Server discovery. Using that API I created a DLL, and I use that DLL from C++. This works in all combinations, but sometimes it fails to retrieve the instances in the Windows 2008 / SQL 2008 cluster combination. If I run the exe 5 times, it succeeds 3 times and fails twice... What is wrong with the Windows / SQL 2008 cluster? Are any additional changes needed to make it work properly? My firewall is off, and I have also added an exception for TCP port 1433. Any help is greatly appreciated... Thanks in advance.

    Read the article

  • MySQL - are FK's useful / viable in a web app?

    - by yoda
    Hi all, I've encountered a discussion about FKs and web applications. Basically, some people say that FKs in web applications don't represent a real improvement and can even make the application slower in some cases. What do you guys think? What's your experience?

    A quote from Heikki Tuuri, creator of the InnoDB engine, founder and CEO of Innobase:

    - InnoDB checks foreign keys as soon as a row is updated; no batching is performed and no checks are delayed until transaction commit.
    - Foreign keys are often a serious performance overhead, but help maintain data consistency.
    - Foreign keys increase the amount of row-level locking done and can make it spread to a lot of tables besides the ones directly updated.

    Read the article

  • C# Select clause returns system exception instead of relevant object

    - by Kashif
    I am trying to use the select clause to pick out an object whose name field matches a specified name from a database query, as follows:

        objectQuery = from obj in objectList
                      where obj.Equals(objectName)
                      select obj;

    In the results view of my query, I get:

        base {System.SystemException} = {"Boolean Equals(System.Object)"}

    where I should be expecting something like a Car, Make, or Model. Would someone please explain what I am doing wrong here? The method in question can be seen here:

        // This function searches the database's table for a single object
        // that matches the 'Name' property with 'objectName'.
        public static T Read<T>(string objectName) where T : IEquatable<T>
        {
            using (ISession session = NHibernateHelper.OpenSession())
            {
                // pull (query) all the objects from the table in the database
                IQueryable<T> objectList = session.Query<T>();

                // return the number of objects in the table
                int count = objectList.Count(); // alternative: int count = makeList.Count<T>();

                IQueryable<T> objectQuery = null; // a reference for our queryable list of objects
                T foundObject = default(T);      // an object reference for our found object

                if (count > 0)
                {
                    // give me all objects that have a name that matches 'objectName'
                    // and store them in 'objectQuery'
                    objectQuery = from obj in objectList
                                  where obj.Equals(objectName)
                                  select obj;

                    // make sure that 'objectQuery' has only one object in it
                    try
                    {
                        foundObject = (T)objectQuery.Single();
                    }
                    catch
                    {
                        return default(T);
                    }

                    // output some information to the console (output screen)
                    Console.WriteLine("Read Make: " + foundObject.ToString());
                }

                // pass the reference of the found object on to whoever asked for it
                return foundObject;
            }
        }

    Note that I am using the interface IEquatable<T> in my method descriptor. An example of the classes I am trying to pull from the database is:

        public class Make : IEquatable<Make>
        {
            public virtual int Id { get; set; }
            public virtual string Name { get; set; }
            public virtual IList<Model> Models { get; set; }

            public Make()
            {
                // this public no-argument constructor is required for NHibernate
            }

            public Make(string makeName)
            {
                this.Name = makeName;
            }

            public override string ToString()
            {
                return Name;
            }

            // Implementation of the IEquatable<T> interface
            public virtual bool Equals(Make make)
            {
                if (this.Id == make.Id) { return true; } else { return false; }
            }

            // Implementation of the IEquatable<T> interface
            public virtual bool Equals(String name)
            {
                if (this.Name.Equals(name)) { return true; } else { return false; }
            }
        }

    And the interface is described simply as:

        public interface IEquatable<T>
        {
            bool Equals(T obj);
        }

    Read the article

  • LINQ - Contains with anonymous type

    - by Marlos
    When using this code (simplified for asking):

        var rows1 = (from t1 in db.TABLE1
                     where t1.COLUMN_A == 1
                     select new { t1.COLUMN_B, t1.COLUMN_C });

        var rows2 = (from t2 in db.TABLE2
                     where rows1.Contains(t2.COLUMN_A)
                     select t2);

    I got the following error:

        The type arguments for method 'System.Linq.Enumerable.Contains<TSource>(System.Collections.Generic.IEnumerable<TSource>, TSource)' cannot be inferred from the usage. Try specifying the type arguments explicitly.

    I need to filter the second query against COLUMN_B of the first result, but I don't know how. Is there a way to filter it?

    Read the article

  • Linq is returning too many results when joined

    - by KallDrexx
    In my schema I have two database tables: relationships and relationship_memberships. I am attempting to retrieve all the entries from the relationships table that have a specific member in them, which means joining to the relationship_memberships table. I have the following method in my business object:

        public IList<DBMappings.relationships> GetRelationshipsByObjectId(int objId)
        {
            var results = from r in _context.Repository<DBMappings.relationships>()
                          join m in _context.Repository<DBMappings.relationship_memberships>()
                              on r.rel_id equals m.rel_id
                          where m.obj_id == objId
                          select r;

            return results.ToList<DBMappings.relationships>();
        }

    _context is my generic repository, using code based on the code outlined here. The problem is I have 3 records in the relationships table and 3 records in the memberships table, each membership tied to a different relationship. Two membership records have an obj_id value of 2, and the other has 3. I am trying to retrieve a list of all relationships related to object #2. When this LINQ runs, _context.Repository<DBMappings.relationships>() returns the correct 3 records and _context.Repository<DBMappings.relationship_memberships>() returns 3 records. However, when results.ToList() executes, the resulting list has 2 issues:

    1) The resulting list contains 6 records, all of type DBMappings.relationships. Upon further inspection there are 2 for each real relationship record, each an exact copy of the other.

    2) All relationships are returned, even those whose membership has an obj_id of 3, even though the objId variable is correctly passed in as 2.

    Can anyone see what's going on? I've spent 2 days looking at this code and I can't understand what is wrong. I have joins in other LINQ queries that seem to work great, and my unit tests show that they still work, so I must be doing something wrong with this one. It seems like I need an extra pair of eyes on this one :)

    Read the article

  • Active Record/ORM vs Normal Forms?

    - by Arsenal
    Hello, I've been playing around with Active Record a bit, and I have noticed that AR/ORM always uses the following database model when creating a one-to-one relationship:

        Person:  id | country_id | name | ...
        Country: id | tld | name | ...

    Now I wondered: isn't this a violation of third normal form? That form clearly states, "Every non-prime attribute is non-transitively dependent on every key of the table." Well, this country_id isn't dependent on the person id, is it? So is this wrong, or am I just not getting the point?

    Read the article

  • Help with SQL Query

    - by djfrear
    With regards to the following statement:

        SELECT *
        FROM explorer.booking_record booking_record_
        INNER JOIN explorer.client client_
            ON booking_record_.labelno = client_.labelno
        INNER JOIN explorer.tour_hotel tour_hotel_
            ON tour_hotel_.tourcode = booking_record_.tourrefcode
        INNER JOIN explorer.hotelrecord hotelrecord_
            ON tour_hotel_.hotelcode = hotelrecord_.hotelref
        WHERE booking_record_.bookingdate NOT LIKE '0000-00-00'
          AND booking_record_.tourdeparturedate NOT LIKE '0000-00-00'
          AND hotelrecord_.hotelgroup = 'LPL'
          AND YEAR(booking_record_.tourdeparturedate)
              BETWEEN YEAR(ADDDATE(NOW(), INTERVAL -5 YEAR)) AND YEAR(NOW())

    My MySQL skills are certainly not up to scratch. The actual result set I want is "customers who have been to 5 or more LPL hotels in the past 5 years". So far I haven't got as far as dealing with the count, as I'm getting a huge number of results, some 250+ per customer. I assume this is to do with the way I'm joining tables.

    Schema-wise, the booking_record table contains a tour reference code, which links to tour_hotel, which contains a hotel code, which links to hotelrecord. The hotelrecord table contains the hotel group. The client table is joined to booking_record via a booking reference, and a client may have many bookings within booking_record.

    If anyone could suggest a way to do this I'd be very grateful, and hopefully I'll learn enough to do it myself next time! I've been scratching my head over this one for a few hours now! Daniel.
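
    One way to get the intended result set (a sketch against the schema described above; clientid is a hypothetical column standing in for whatever uniquely identifies a customer, since labelno is a per-booking reference) is to aggregate per client and count distinct hotels instead of returning one row per booking:

        SELECT client_.clientid,
               COUNT(DISTINCT hotelrecord_.hotelref) AS lpl_hotels_visited
        FROM explorer.booking_record booking_record_
        INNER JOIN explorer.client client_
            ON booking_record_.labelno = client_.labelno
        INNER JOIN explorer.tour_hotel tour_hotel_
            ON tour_hotel_.tourcode = booking_record_.tourrefcode
        INNER JOIN explorer.hotelrecord hotelrecord_
            ON tour_hotel_.hotelcode = hotelrecord_.hotelref
        WHERE hotelrecord_.hotelgroup = 'LPL'
          -- rolling five years; swap in the YEAR() comparison from the
          -- original, plus its '0000-00-00' filters, if calendar years are wanted
          AND booking_record_.tourdeparturedate >= NOW() - INTERVAL 5 YEAR
        GROUP BY client_.clientid
        HAVING COUNT(DISTINCT hotelrecord_.hotelref) >= 5;

    The HAVING clause is what turns "250+ rows per customer" into one row per qualifying customer.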

    Read the article

  • How to avoid timestamp issue in a long query?

    - by pingi
    Hi, I have the following 2 tables:

        items:
            id    int primary key
            bla   text

        events:
            id_items  int
            num       int
            when      timestamp without time zone
            ble       text

            composite primary key: id_items, num

    and I want to select, for each item, its most recent event (the newest 'when'). I wrote a query, but I don't know if it could be written more efficiently. Also, I have an issue comparing timestamp values in PostgreSQL (in my setup, 2010-05-08T10:00:00.123 compares equal to 2010-05-08T10:00:00.321), so I select MAX(num) as a tie-breaker. Any thoughts on how to make it better? Thanks.

        SELECT i.*, ea.*
        FROM items AS i
        JOIN (
            SELECT t.s AS t_s, t.c AS t_c, MAX(e.num) AS o
            FROM events AS e
            JOIN (
                SELECT DISTINCT id_items AS s, MAX("when") AS c
                FROM events
                GROUP BY s
                ORDER BY c
            ) AS t ON t.s = e.id_items AND e."when" = t.c
            GROUP BY t.s, t.c
        ) AS tt ON tt.t_s = i.id
        JOIN events AS ea
            ON ea.id_items = tt.t_s
           AND ea."when" = tt.t_c
           AND ea.num = tt.o;
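
    PostgreSQL's DISTINCT ON (a sketch against the schema above; it is Postgres-specific rather than portable SQL) collapses the nested joins into a single pass, keeping one row per item ordered by the newest "when" with num as the tie-breaker:

        SELECT i.*, e.*
        FROM items AS i
        JOIN (
            SELECT DISTINCT ON (id_items) *
            FROM events
            ORDER BY id_items, "when" DESC, num DESC  -- newest first; num breaks ties
        ) AS e ON e.id_items = i.id;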

    Read the article

  • Storing i18n data in a database using XML

    - by TigrouMeow
    Hello, I may have to store some i18n-ed data in my database using XML if I don't fight back. That's not my choice, but it's in the specifications I have to follow. We would have, for example, something like the following in a 'Country' column:

        <lang='fr'>Etats-Unis</lang>
        <lang='en'>United States</lang>

    This would apply to many columns in the database. I don't think it's a good idea at all. I tend to think that a cell in a database should represent a single piece of data (better for look-ups), and that a database should have two dimensions maximum, not 3 or more (one extra request would be required per dimension; a dimension here would be the number of XML attributes). My idea was to have a separate table for all the translations, with columns such as: ID / Language / Translation. However, I must admit that I'm really not sure what the best way is to store data in various languages in a DB... Thanks for your advice :)
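
    A minimal sketch of that separate-table idea (all names here are illustrative, not from the specification):

        -- One row per (label, language) pair instead of XML packed into a cell.
        CREATE TABLE translation (
            string_id   integer      NOT NULL,  -- shared key for one logical label
            lang        char(2)      NOT NULL,  -- e.g. 'fr', 'en'
            translation varchar(255) NOT NULL,
            PRIMARY KEY (string_id, lang)
        );

        CREATE TABLE country (
            id      integer PRIMARY KEY,
            tld     char(2),
            name_id integer NOT NULL  -- joins to translation.string_id
        );

        -- Look up a country name in the user's language:
        SELECT t.translation
          FROM country c
          JOIN translation t ON t.string_id = c.name_id
         WHERE c.id = 1
           AND t.lang = 'fr';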

    Read the article

  • transforming from 'Y' or 'N' to bit

    - by rap-uvic
    Hello, I have a table which has a column called Direct of type char(1). Its values are either 'Y', 'N', or NULL. I am creating a view, and I want the value transformed to either 0 or 1, of type bit. Right now it comes out as type int. How do I go about doing this? Here is the code:

        CASE
            WHEN Direct = 'Y' THEN (SELECT 1)
            WHEN Direct <> 'Y' THEN (SELECT 0)
        END AS DirectDebit
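
    Since a CASE expression takes its type from its branches, casting the whole expression produces the bit column directly (a sketch; note that, like the original, it leaves NULL for rows where Direct is NULL, because WHEN Direct <> 'Y' never matches NULL either):

        CAST(CASE
                 WHEN Direct = 'Y' THEN 1
                 WHEN Direct = 'N' THEN 0
             END AS bit) AS DirectDebit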

    Read the article

  • How to manage multiple versions of the same record

    - by Darvis Lombardo
    I am doing short-term contract work for a company that is trying to implement a check-in/check-out type of workflow for their database records. Here's how it should work:

    1) A user creates a new entity within the application. There are about 20 related tables that will be populated in addition to the main entity table.

    2) Once the entity is created, the user will mark it as the master.

    3) Another user can make changes to the master only by "checking out" the entity. Multiple users can check out the entity at the same time.

    4) Once the user has made all the necessary changes to the entity, they put it in a "needs approval" status.

    5) After an authorized user reviews the entity, they can promote it to master, which puts the original record in a tombstoned status.

    The way they currently accomplish the "check out" is by duplicating the entity records in all the tables. The primary keys include EntityID + EntityDate, so they duplicate the entity records in all related tables with the same EntityID and an updated EntityDate, and give the copy a status of "checked out". When the record is put into the next state (needs approval), the duplication occurs again. Eventually it is promoted to master, at which time the final record is marked as master and the original master is marked as dead.

    This design seems hideous to me, but I understand why they've done it. When someone looks up an entity from within the application, they need to see all current versions of that entity, and this was a very straightforward way of making that happen. But the fact that they represent the same entity multiple times within the same table(s) doesn't sit well with me, nor does the fact that they duplicate EVERY piece of data rather than storing only deltas.

    I would be interested in hearing your reaction to the design, whether positive or negative. I would also be grateful for any resources you can point me to that might be useful for seeing how someone else has implemented such a mechanism. Thanks! Darvis
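
    For comparison, one common alternative shape (a sketch only; every name here is invented for illustration) keeps a single identity row per entity and moves the workflow state into a version table, so the ~20 related tables key off a VersionID instead of repeating EntityID + EntityDate everywhere:

        CREATE TABLE Entity (
            EntityID int PRIMARY KEY  -- one stable row per logical entity
        );

        CREATE TABLE EntityVersion (
            VersionID int IDENTITY PRIMARY KEY,
            EntityID  int NOT NULL REFERENCES Entity (EntityID),
            Status    varchar(20) NOT NULL,  -- 'master', 'checked out', 'needs approval', 'dead'
            CreatedAt datetime NOT NULL,
            CreatedBy int NOT NULL
        );

        -- Promotion is then a status flip inside one transaction:
        BEGIN TRANSACTION;
        UPDATE EntityVersion SET Status = 'dead'
         WHERE EntityID = 42 AND Status = 'master';
        UPDATE EntityVersion SET Status = 'master'
         WHERE VersionID = 1001;  -- the approved version
        COMMIT;

    Lookups that need "all current versions" become a join on EntityID filtered to live statuses, rather than a scan over duplicated rows.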

    Read the article

  • Precision of Interval for PL/SQL Function value

    - by Gary
    Generally, when you specify a function, the scale/precision/size of the return datatype is undefined. For example, you say FUNCTION show_price RETURN NUMBER or FUNCTION show_name RETURN VARCHAR2. You are not allowed to have FUNCTION show_price RETURN NUMBER(10,2) or FUNCTION show_name RETURN VARCHAR2(20), and the function return value is unrestricted. This is documented functionality.

    Now, I get a precision error (ORA-01873) if I push 9999 hours (about 400 days) into the following. The limit exists because the default day precision is 2.

        DECLARE
          v_int INTERVAL DAY (4) TO SECOND(0);
          FUNCTION hhmm_to_interval RETURN INTERVAL DAY TO SECOND IS
            v_hhmm INTERVAL DAY (4) TO SECOND(0);
          BEGIN
            v_hhmm := to_dsinterval('PT9999H');
            RETURN v_hhmm;
          END hhmm_to_interval;
        BEGIN
          v_int := hhmm_to_interval;
        END;
        /

    And it won't allow the precision to be specified directly as part of the datatype returned by the function:

        DECLARE
          v_int INTERVAL DAY (4) TO SECOND(0);
          FUNCTION hhmm_to_interval RETURN INTERVAL DAY (4) TO SECOND IS
            v_hhmm INTERVAL DAY (4) TO SECOND(0);
          BEGIN
            v_hhmm := to_dsinterval('PT9999H');
            RETURN v_hhmm;
          END hhmm_to_interval;
        BEGIN
          v_int := hhmm_to_interval;
        END;
        /

    I can use a SUBTYPE:

        DECLARE
          SUBTYPE t_int IS INTERVAL DAY (4) TO SECOND(0);
          v_int INTERVAL DAY (4) TO SECOND(0);
          FUNCTION hhmm_to_interval RETURN t_int IS
            v_hhmm INTERVAL DAY (4) TO SECOND(0);
          BEGIN
            v_hhmm := to_dsinterval('PT9999H');
            RETURN v_hhmm;
          END hhmm_to_interval;
        BEGIN
          v_int := hhmm_to_interval;
        END;
        /

    Are there any drawbacks to the subtype approach? Any alternatives (e.g. some place to change a default precision)? Working with 10gR2.

    Read the article
