Search Results

Search found 28900 results on 1156 pages for 'sql 2005'.


  • Continuously checking database from a Windows service

    - by JonF
    I am making a Windows service which needs to continuously check for database entries that can be added at any time and tell it to execute some code. It looks for rows whose status is set to pending and whose execute time is earlier than the current time. Is the only way to do this to run select statements over and over? It might need to execute the code every minute, which means running the select statement every minute looking for entries in the database. I'm trying to avoid unnecessary CPU time because I'm probably going to end up paying for CPU cycles at the hosting provider.
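
    A minimal sketch of the polling approach, assuming a table named Tasks with Status and ExecuteTime columns (hypothetical names); on SQL Server, SqlDependency can push change notifications instead of polling, but a one-minute timer like this costs essentially no CPU between ticks:

        // Polling sketch for the service; table/column names are assumptions.
        using System;
        using System.Data.SqlClient;
        using System.Threading;

        class PendingTaskPoller
        {
            private readonly string _connStr;
            private readonly Timer _timer;

            public PendingTaskPoller(string connStr)
            {
                _connStr = connStr;
                // Poll once a minute; a longer interval trades latency for fewer queries.
                _timer = new Timer(Poll, null, TimeSpan.Zero, TimeSpan.FromMinutes(1));
            }

            private void Poll(object state)
            {
                using (var conn = new SqlConnection(_connStr))
                using (var cmd = new SqlCommand(
                    "SELECT Id FROM Tasks WHERE Status = 'pending' AND ExecuteTime <= GETDATE()", conn))
                {
                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // Execute the work associated with reader.GetInt32(0) here.
                        }
                    }
                }
            }
        }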

    Read the article

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9MB on disk, 10MB of indexes according to pgAdmin. The problem is that inserting them, by whatever method, literally takes ages: up to 3 minutes of 100% disk-busy time. That's not something you want on a production site. It doesn't matter whether the inserts are in a transaction or issued via plain INSERT, multi-row INSERT, COPY FROM, or even INSERT INTO t1 SELECT * FROM t2. After noticing this wasn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT FROM took less than a second to execute, which isn't too surprising for a table <= 20MB on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys. Disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referenced tables, as 3MB/sec * 180s is way more data than the 20MB this new table takes on disk. There was no WAL for the 180s case; I was testing in psql directly (in Django, add ~50% overhead for WAL logging). I tried @commit_on_success, same slowness; I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10MB worth of inserts generate 10x 16MB log segments? Table layout: id serial primary key, a bunch of int32 columns, and 3 foreign keys to: a small table (198 rows, 16kB on disk); a large table (1.2M rows, 59MB data + 89MB index on disk); a large table (2.2M rows, 198MB + 210MB). So, am I doomed to either drop the foreign keys manually or use the table in a very un-Django way by defining and saving bla_id x3 and skipping models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
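
    The post already names the usual escape hatch; for completeness, a sketch of dropping and re-adding one constraint around the bulk load (constraint, table, and column names are hypothetical). Re-adding the constraint validates all rows in one set-based pass instead of one lookup per inserted row:

        -- Hypothetical names; adjust to the real schema.
        ALTER TABLE new_table DROP CONSTRAINT new_table_small_id_fkey;

        -- ... bulk load here (COPY / INSERT INTO ... SELECT) ...

        ALTER TABLE new_table ADD CONSTRAINT new_table_small_id_fkey
            FOREIGN KEY (small_id) REFERENCES small_table (id);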

    Read the article

  • Extending LINQ classes to my own partial classes in different namespaces?

    - by sah302
    I have a .dbml file which of course contains the auto-generated classes based on my tables. I would, however, like to extend them with my own classes. Typically I design things so that each of my tables gets its own namespace in its own folder containing all of its associated DAO and service classes. So if I am dealing with a page that only has to do with 'customers', for instance, I can include only the customer namespace. But when using LINQ I seem to be unable to do this. I have tried removing the default namespace from the project; I have tried putting the .dbml file into its own folder with a custom namespace and then adding a 'using' statement, but nothing works. I also saw the Entity Namespace, Context Namespace, and Custom Tool Namespace properties associated with the .dbml file and tried setting all of these to some name x and adding 'using x' in my other class to allow me to extend the partial classes, but it just doesn't work. Is this possible, or do I have to keep all extended partial classes in the same namespace as the .dbml file?
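
    For what it's worth, C# requires all parts of a partial class to live in the same namespace and the same assembly, so the hand-written half must match whatever namespace the designer generates into. A minimal sketch (the Customer class, property names, and namespace are assumptions):

        // Must be the same namespace the .dbml designer generates into.
        namespace MyApp.Data
        {
            public partial class Customer
            {
                // Hand-written extension of the generated entity.
                public string DisplayName
                {
                    get { return FirstName + " " + LastName; }
                }
            }
        }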

    Read the article

  • Problem with LINQ in C#

    - by David Bonnici
    I am encountering a problem when using LINQ in C#; I constantly get "Specified cast is not valid". This is what I am trying to do. I create a class in which I declare all the columns of the table:

        [Table(Name="tbl_Aff")]
        public class Affiliate
        {
            [Column] public string name;
            [Column] public string firstname;
            [Column] public string surname;
            [Column] public string title;
        }

    I then declare a strongly typed DataContext in which I declare all Table collections as members of the context:

        public partial class Database : DataContext
        {
            public Table<Affiliate> affiliate;

            public Database() : base(Settings.getConnectionString()) { }
            // This method gets the connection string by reading from an XML file.
        }

        public partial class Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                Database database = new Database();
                try
                {
                    var q = from a in database.affiliate
                            select a;
                    foreach (var aff in q) // Here I get the error "Specified cast is not valid"
                    {
                        lblMessage.InnerHtml += aff.name + "";
                    }
                }
                catch (Exception ex)
                {
                    System.Console.WriteLine(ex.Message);
                }
            }
        }

    Read the article

  • ora-00939 error in reporting services, SSRS

    - by san
    Hi, I have an SSRS report, Oracle is my backend, and I am using the following query for the dataset of my second parameter:

        select distinct X
        from v_stf_sec_user_staffing_center usc
        where usc.center_group_id in
              (select distinct center_group_id
               from V_T_STAFFING_CENTER_GROUP scg
               where INSTR(','||REPLACE(:PI_REGION_LIST,' ')||',', ','||scg.group_abbreviation||',') > 0)
          and usc.nt_user_name = :PI_NT_USER_NAME

    Here PI_REGION_LIST is a multi-valued parameter of string type, and PI_NT_USER_NAME is a default string-valued parameter. This query works fine when I try to execute it manually in the Data tab, and also in the Oracle tool. But when I run the report in SSRS and select more than 3 values for the parameter PI_REGION_LIST, the report throws an error on this dataset: ora-00939, too many arguments for function. I am not able to figure out the error here. Please help me with an idea. Thanks in advance, Suni.
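
    One likely cause (my reading, not stated in the post): SSRS expands a multi-value parameter into one bind value per selected item, so with enough selections REPLACE suddenly receives more than its maximum of three arguments, which is exactly ORA-00939. A common workaround is to collapse the selections into a single comma-separated string in the dataset's parameter expression:

        ' SSRS dataset parameter expression (VB syntax) for PI_REGION_LIST:
        =Join(Parameters!PI_REGION_LIST.Value, ",")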

    Read the article

  • MSDN about stored procedure default return value

    - by Ilya
    Hello, could anyone point me to exactly where MSDN says that every user stored procedure returns 0 by default if no error happens? In other words, can I be sure that the example code given below, when run as a stored procedure,

        IF someStatement
        BEGIN
            RETURN 1
        END

    will always return zero if someStatement is false and no error occurs? I know that it actually works this way, but I have failed to find any explicit statement about this from Microsoft.
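
    A quick way to observe the behavior (a test sketch, not the documentation the poster is after):

        -- Hypothetical procedure demonstrating the implicit return value.
        CREATE PROCEDURE dbo.ReturnTest @flag bit AS
        BEGIN
            IF @flag = 1
            BEGIN
                RETURN 1
            END
            -- No explicit RETURN on this code path.
        END
        GO
        DECLARE @rc int;
        EXEC @rc = dbo.ReturnTest @flag = 0;
        SELECT @rc AS ReturnValue;  -- yields 0 when no error occurred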

    Read the article

  • Is it possible to have a tableless select with multiple rows?

    - by outis
    A SELECT without a FROM clause gets us multiple columns without querying a table: SELECT 17+23, REPLACE('bannanna', 'nn', 'n'), RAND(), CURRENT_TIMESTAMP; How can we write a query that results in multiple rows without referring to a table? Basically, abuse SELECT to turn it into a data definition statement. The result could have a single column or multiple columns. I'm most interested in a DBMS-neutral answer, but others (e.g. based on UNPIVOT) are welcome. There's no practical application behind this question; it's more theoretical than practical.
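
    Two common constructions (sketches; support varies by DBMS):

        -- Portable: one SELECT per row, glued together with UNION ALL.
        SELECT 1 AS n UNION ALL
        SELECT 2 UNION ALL
        SELECT 3;

        -- Where row constructors are supported (e.g. PostgreSQL, SQL Server 2008+):
        SELECT * FROM (VALUES (1), (2), (3)) AS t(n);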

    Read the article

  • Validation L2S question

    - by user158020
    This may be a bit long-winded because I am new to WPF. I have created a partial class for an entity in my L2S class that is primarily used for validation; it implements the OnChanging and OnValidate methods. I am trying to use the MVVM pattern, and in a window/view I have set the DataContext in the XAML:

        <Window.DataContext>
            <vm:StartViewModel />
        </Window.DataContext>

    When a user leaves a required field in the view blank, the OnChanging event of the partial class fires when I close the form, not when I save the data. So, if a user leaves the textbox blank, the old value is retained and the OnChanging method fires, but I have no idea how to alert the user of the resulting error. Here is my OnChanging code in the partial class:

        partial void Ondocument_titleChanging(string value)
        {
            if (value.Length == 0)
                throw new Exception("Document title is required.");
            if (value.Length > 256)
                throw new Exception("Document title cannot be longer than 256 characters.");
        }

    Throwing an exception doesn't notify the user of the error; it just allows the form to close and rejects the changes to the textbox. Hope this makes sense... Edit: this example was taken from Scott Guthrie's article here: http://aspalliance.com/1427_LINQ_to_SQL_Part_5__Binding_UI_using_the_ASPLinqDataSource_Control.5
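
    One thing worth checking (my suggestion, not from the post): WPF bindings swallow exceptions thrown in property setters unless told otherwise, so the binding needs ValidatesOnExceptions to surface the error, and UpdateSourceTrigger to push the value back as the user edits rather than when the form closes. A sketch, with DocumentTitle as a hypothetical bound property:

        <TextBox Text="{Binding DocumentTitle,
                                ValidatesOnExceptions=True,
                                UpdateSourceTrigger=PropertyChanged}" />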

    Read the article

  • Linq like or other construction

    - by Yauhen Kavalenka
    I have an Oracle DB in my solution, and I want to get the results of this query: select * from doctor where doctor.name like '%IVANOV_A%'; But if I do it in LINQ I cannot get any result: from p in repository.Doctor.Where(x => x.Name.ToLower().Contains(name)) select p; where 'name' is a string parameter. The web layer sends a string like "Ivanov a" or "A Ivanov", but I'd like to let the user choose their own pattern for the query. How can I search by name if the name consists of a first name and a last name but the user doesn't know the doctor's full name?
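
    A common approach (a sketch under my own assumptions, not from the post) is to split the input into tokens and require every token to appear in the name, which makes "Ivanov A" and "A Ivanov" equivalent:

        // Token-based name search; repository.Doctor is assumed to be IQueryable<Doctor>.
        var tokens = name.ToLower().Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries);

        IQueryable<Doctor> query = repository.Doctor;
        foreach (var token in tokens)
        {
            var t = token; // avoid capturing the loop variable in the lambda
            query = query.Where(x => x.Name.ToLower().Contains(t));
        }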

    Read the article

  • How to report DataContext.SubmitChanges() progress with LINQ2SQL

    - by kzen
    If there is a foreach loop that contains DataContext.Customer.InsertOnSubmit(cust), for example:

        foreach (Object obj in collection)
        {
            Customer cust = new Customer { Id = obj.Id, Name = obj.Name ... };
            DataContext.Customer.InsertOnSubmit(cust);
        }

    and outside of the loop there is a call to:

        DataContext.SubmitChanges();

    is there a way to obtain the SubmitChanges progress in order to report it back to the user (or a different approach, without moving SubmitChanges into the loop)?
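
    One workable compromise (my sketch, not from the post) is to submit in fixed-size batches and report after each one; it is not per-row progress, but it keeps the user informed without one round trip per insert:

        // Sketch: 'items' is assumed to be a typed List of source objects,
        // 'dataContext' a live DataContext, and ReportProgress a hypothetical UI callback.
        const int batchSize = 100;
        for (int i = 0; i < items.Count; i += batchSize)
        {
            foreach (var obj in items.Skip(i).Take(batchSize))
            {
                dataContext.Customer.InsertOnSubmit(new Customer { Id = obj.Id, Name = obj.Name });
            }
            dataContext.SubmitChanges(); // one round trip per batch
            ReportProgress(Math.Min(i + batchSize, items.Count), items.Count);
        }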

    Read the article

  • SYSDATE - 1 error on pl/sql function

    - by ayo
    Hi curtisk/all, I have an issue: when I run the statements below, the last one gives me the following error:

        select 'EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME =>'''||name||''', OPTIONS=>DBMS_LOGMNR.NEW);'
        from v$archived_log where name is not null;

        select 'EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME =>'''||name||''', OPTIONS=>DBMS_LOGMNR.ADDFILE);'
        from v$archived_log where name is not null;

        EXECUTE DBMS_LOGMNR.START_LOGMNR(
            STARTTIME => SYSDATE - 1,
            ENDTIME   => SYSDATE,
            OPTIONS   => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG
                       + DBMS_LOGMNR.CONTINUOUS_MINE
                       + DBMS_LOGMNR.COMMITTED_DATA_ONLY
                       + DBMS_LOGMNR.PRINT_PRETTY_SQL);

        Error:
        ERROR at line 1:
        ORA-01291: missing logfile
        ORA-06512: at "SYS.DBMS_LOGMNR", line 58
        ORA-06512: at line 1

    But I have added all the archived logs for several days back, and my SYSDATE is today. Kindly help out on this issue. Thanks. Regards, Ayo
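
    One thing to verify (my suggestion): that the registered archived logs actually cover the whole requested mining window, since ORA-01291 is raised when any log inside the STARTTIME..ENDTIME range is unavailable:

        -- Check which archived logs cover the last day.
        SELECT name, first_time, next_time
        FROM   v$archived_log
        WHERE  next_time >= SYSDATE - 1
        ORDER  BY first_time;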

    Read the article

  • How to manage multiple versions of the same record

    - by Darvis Lombardo
    I am doing short-term contract work for a company that is trying to implement a check-in/check-out type of workflow for their database records. Here's how it should work:

    1) A user creates a new entity within the application. There are about 20 related tables that will be populated in addition to the main entity table.
    2) Once the entity is created, the user will mark it as the master.
    3) Another user can make changes to the master only by "checking out" the entity. Multiple users can check out the entity at the same time.
    4) Once the user has made all the necessary changes to the entity, they put it in a "needs approval" status.
    5) After an authorized user reviews the entity, they can promote it to master, which puts the original record in a tombstoned status.

    The way they currently accomplish the "check out" is by duplicating the entity records in all the tables. The primary keys include EntityID + EntityDate, so they duplicate the entity records in all related tables with the same EntityID and an updated EntityDate and give them a status of "checked out". When the record is put into the next state (needs approval), the duplication occurs again. Eventually it is promoted to master, at which time the final record is marked as master and the original master is marked as dead. This design seems hideous to me, but I understand why they've done it. When someone looks up an entity from within the application, they need to see all current versions of that entity, and this was a very straightforward way of making that happen. But the fact that they are representing the same entity multiple times within the same table(s) doesn't sit well with me, nor does the fact that they are duplicating EVERY piece of data rather than only storing deltas. I would be interested in hearing your reaction to the design, whether positive or negative. I would also be grateful for any resources you can point me to that might be useful for seeing how someone else has implemented such a mechanism. Thanks! Darvis
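
    For contrast, a minimal sketch (entirely an assumption on my part, not the company's schema) of making the version explicit instead of overloading the primary key with dates:

        -- Hypothetical schema: each version is a row; "master" is a status, not a copy.
        CREATE TABLE EntityVersion (
            EntityID   int          NOT NULL,
            Version    int          NOT NULL,
            Status     varchar(20)  NOT NULL,  -- Master / CheckedOut / NeedsApproval / Tombstoned
            CreatedBy  varchar(50)  NOT NULL,
            CreatedAt  datetime     NOT NULL,
            PRIMARY KEY (EntityID, Version)
        );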

    Read the article

  • MySQL - are FK's useful / viable in a web app?

    - by yoda
    Hi all, I've encountered a discussion related to FKs and web applications. Basically, some people say that FKs in web applications don't represent a real improvement and can even make the application slower in some cases. What do you guys think? What's your experience? -- A quote from Heikki Tuuri, creator of the InnoDB engine, founder and CEO of Innobase:

    - InnoDB checks foreign keys as soon as a row is updated; no batching is performed and no checks are delayed until transaction commit.
    - Foreign keys are often a serious performance overhead, but help maintain data consistency.
    - Foreign keys increase the amount of row-level locking done and can make it spread to a lot of tables besides the ones directly updated.

    Read the article

  • LINQ - Contains with anonymous type

    - by Marlos
    When using this code (simplified for the question):

        var rows1 = (from t1 in db.TABLE1
                     where t1.COLUMN_A == 1
                     select new { t1.COLUMN_B, t1.COLUMN_C });

        var rows2 = (from t2 in db.TABLE2
                     where rows1.Contains(t2.COLUMN_A)
                     select t2);

    I get the following error: The type arguments for method 'System.Linq.Enumerable.Contains<TSource>(System.Collections.Generic.IEnumerable<TSource>, TSource)' cannot be inferred from the usage. Try specifying the type arguments explicitly. I need to filter the first result by COLUMN_B, but I don't know how. Is there a way to filter it?
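
    The error comes from comparing a scalar column against a sequence of anonymous objects; one fix (a sketch, assuming COLUMN_B is the matching key) is to project the key column before calling Contains:

        var rows2 = from t2 in db.TABLE2
                    where rows1.Select(r => r.COLUMN_B).Contains(t2.COLUMN_A)
                    select t2;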

    Read the article

  • Is closing/disposing an SqlDataReader needed if you are already closing the sqlconnection?

    - by Brian
    I noticed this question, but my question is a bit more specific. Is there any advantage to using

        using (SqlConnection conn = new SqlConnection(conStr))
        {
            using (SqlCommand command = new SqlCommand())
            {
                // do stuff
            }
        }

    instead of

        using (SqlConnection conn = new SqlConnection(conStr))
        {
            SqlCommand command = new SqlCommand();
            // do stuff
        }

    Obviously it does matter if you run more than one command with the same connection, since closing an SqlDataReader is more efficient than closing and reopening a connection (calling conn.Close(); conn.Open(); will also free up the connection). I see many people insist that failure to close the DataReader means leaving open connection resources around, but doesn't that only apply if you don't close the connection?
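
    For reference, a common pattern (my sketch; the query is hypothetical) that disposes all three objects explicitly. ExecuteReader can also be given CommandBehavior.CloseConnection, so disposing the reader closes the connection with it:

        using (SqlConnection conn = new SqlConnection(conStr))
        using (SqlCommand command = new SqlCommand("SELECT name FROM customers", conn))
        {
            conn.Open();
            using (SqlDataReader reader = command.ExecuteReader(CommandBehavior.CloseConnection))
            {
                while (reader.Read())
                {
                    // consume reader.GetString(0) here
                }
            } // disposing the reader also closes the connection here
        }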

    Read the article

  • Databinding multiple tables linq query to gridview?

    - by Curtis White
    My question is: how can I display a LINQ query in a GridView that has data from multiple tables AND allow the user to edit some of the fields or delete the data from a single table? I'd like to do this with either a LinqDataSource or a LINQ query. I'm aware I can set e.Result to the query in the Selecting event. I've also been able to build a custom databound control for displaying the LINQ relations (parent.child). However, I'm not sure how I can make this work with delete; I'm thinking I may need to handle the delete event with custom code.
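
    A sketch of the Selecting-event approach mentioned above (the DataContext, tables, and fields are assumptions); the projection pulls related data in through the parent.child relation:

        // Code-behind for a LinqDataSource; projects across two related tables.
        protected void LinqDataSource1_Selecting(object sender, LinqDataSourceSelectEventArgs e)
        {
            var db = new MyDataContext(); // hypothetical DataContext
            e.Result = from c in db.Customers
                       select new
                       {
                           c.CustomerID,
                           c.Name,
                           OrderCount = c.Orders.Count() // reached via the relation
                       };
        }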

    Read the article

  • How to avoid timestamp issue in a long query?

    - by pingi
    Hi, I have the following 2 tables:

        items:
            id int primary key
            bla text

        events:
            id_items int
            num int
            when timestamp without time zone
            ble text
            composite primary key: id_items, num

    and I want to select, for each item, the most recent event (the newest 'when'). I wrote a query, but I don't know if it could be written more efficiently. Also, on PostgreSQL there is an issue with comparing timestamp values: 2010-05-08T10:00:00.123 == 2010-05-08T10:00:00.321, so I select with MAX(num). Any thoughts on how to make it better? Thanks.

        SELECT i.*, ea.*
        FROM items AS i
        JOIN (
            SELECT t.s AS t_s, t.c AS t_c, max(e.num) AS o
            FROM events AS e
            JOIN (
                SELECT DISTINCT id_items AS s, MAX("when") AS c
                FROM events
                GROUP BY s
                ORDER BY c
            ) AS t ON t.s = e.id_items AND e."when" = t.c
            GROUP BY t.s, t.c
        ) AS tt ON tt.t_s = i.id
        JOIN events AS ea ON ea.id_items = tt.t_s AND ea."when" = tt.t_c AND ea.num = tt.o;
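
    On PostgreSQL specifically, DISTINCT ON expresses "latest event per item" much more compactly; a sketch against the schema above (note that when is a reserved word and needs quoting):

        SELECT DISTINCT ON (e.id_items) i.*, e.*
        FROM items AS i
        JOIN events AS e ON e.id_items = i.id
        ORDER BY e.id_items, e."when" DESC, e.num DESC;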

    Read the article

  • Index on column with only 2 distinct values

    - by Will
    I am wondering about the performance of this index. I have an "invalid" varchar(1) column that has 2 values: NULL or 'Y'. I have an index on (invalid), as well as on (invalid, last_validated). last_validated is a datetime (this is used for an unrelated SELECT query). I am flagging a small number of rows (1-5%) in the table as 'to be deleted', so that DELETE FROM items WHERE invalid='Y' does not perform a full table scan for the invalid items. A problem is that the actual DELETE is quite slow now, possibly because all the index entries are being removed as the rows are deleted. Would a bitmap index provide better performance for this? Or perhaps no index at all?
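
    If the database supports partial/filtered indexes, indexing only the flagged rows keeps the index tiny and cheap to maintain; in Oracle, a function-based index can emulate one because all-NULL keys are excluded from B-tree indexes. Two sketches (my assumptions about the schema; queries must use the matching predicate or expression to benefit):

        -- PostgreSQL / SQL Server style: index only the rows to be deleted.
        CREATE INDEX idx_items_invalid ON items (invalid) WHERE invalid = 'Y';

        -- Oracle style: only non-NULL keys are indexed, so this covers just the 'Y' rows.
        CREATE INDEX idx_items_invalid ON items (CASE WHEN invalid = 'Y' THEN 'Y' END);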

    Read the article

  • MYSQL join - reference external field from nested select?

    - by PHP thinker
    Is it allowed to reference an external field from a nested select? E.g.:

        SELECT *
        FROM ext1
        LEFT JOIN (SELECT * FROM int2 WHERE int2.id = ext1.some_id) AS x ON 1=1

    In this case, the nested select references ext1.some_id. I am getting errors saying that the field ext1.some_id is unknown. Is it possible? Is there some other way?
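
    In MySQL, a derived table cannot reference columns of an outer table (true lateral derived tables only arrived in MySQL 8.0.14); the usual rewrite moves the correlation into the join condition, a sketch:

        SELECT ext1.*, x.*
        FROM ext1
        LEFT JOIN int2 AS x ON x.id = ext1.some_id;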

    Read the article

  • Storing i18n data in a database using XML

    - by TigrouMeow
    Hello, I may have to store some i18n-ed data in my database using XML if I don't fight back. That's not my choice, but it's in the specifications I have to follow. We would have, for example, something like the following in a 'Country' column:

        <lang='fr'>Etats-Unis</lang>
        <lang='en'>United States</lang>

    This would apply to many columns in the database. I don't think it's a good idea at all. I tend to think that a cell in a database should represent a single piece of data (better for look-ups), and that the database should have two dimensions maximum, not 3 or more (one extra query would be required per extra dimension; a dimension here would correspond to an XML attribute). My idea was to have a separate table for all the translations, with columns such as: ID / Language / Translation. However, I should admit that I'm really not sure what the best way is to store data in various languages in a DB... Thanks for your advice :)
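
    A minimal sketch of the separate-translations idea described above (table and column names are my own assumptions):

        -- One row per (string, language) pair instead of an XML blob in each cell.
        CREATE TABLE translation (
            string_id   int          NOT NULL,  -- shared key stored in the referencing column
            lang        char(2)      NOT NULL,  -- e.g. 'fr', 'en'
            translation varchar(255) NOT NULL,
            PRIMARY KEY (string_id, lang)
        );

        -- Look up a country name in French (country.name_id is hypothetical):
        SELECT t.translation
        FROM country AS c
        JOIN translation AS t ON t.string_id = c.name_id AND t.lang = 'fr'
        WHERE c.id = 1;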

    Read the article

  • SMO some times doesn't display the instances in sql2008 cluster

    - by Cute
    Hi, I have used the SMO API; in it I call SmoApplication.EnumAvailableServers(false) and from the result I filter out the local instances. I used this approach instead of passing true so the same code is also convenient for remote SQL discovery. Using that API I created a DLL and use that DLL from C++. This works in all combinations, but sometimes it fails to retrieve the instances in the Windows 2008 + SQL 2008 cluster combination. If I run the exe 5 times, it succeeds 3 times and fails twice... What is wrong with the Win/SQL 2008 cluster? Are any additional changes needed to make it work properly? My firewall is off, and I have also added an exception for TCP port 1433. Any help is greatly appreciated... Thanks in advance.
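
    Since network enumeration is inherently best-effort, one mitigation (my sketch, not a fix for the underlying cluster issue) is to retry the call a few times before giving up; passing true is my assumption of how to force a fresh rediscovery on each attempt:

        using System.Data;
        using Microsoft.SqlServer.Management.Smo;

        class ServerDiscovery
        {
            // Retry enumeration; a single broadcast can miss clustered instances.
            static DataTable FindServers(int attempts)
            {
                DataTable servers = null;
                for (int i = 0; i < attempts; i++)
                {
                    servers = SmoApplication.EnumAvailableServers(true);
                    if (servers != null && servers.Rows.Count > 0)
                        break;
                }
                return servers;
            }
        }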

    Read the article

  • Database layout tagging system

    - by Kurresmack
    I am creating a web site for a customer and they want to be able to create articles. My idea is to tag them, so I am going to implement the tagging system. What is the best design, both from an architectural and a performance perspective?

    1. A table with all tags, plus a relationship table that links articles to tags: an articles table with an ID; a tags table with an ID; a link table with columns Article.ID and Tag.ID.

    2. One table with articles and one with tags for articles: an articles table with an ID; a tags table with columns Article.ID and the tag text.

    Thanks in advance!
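
    For concreteness, a sketch of option 1 in DDL (names and types are my assumptions):

        CREATE TABLE article (
            id    int PRIMARY KEY,
            title varchar(255) NOT NULL
        );

        CREATE TABLE tag (
            id   int PRIMARY KEY,
            name varchar(64) NOT NULL UNIQUE   -- each tag stored exactly once
        );

        CREATE TABLE article_tag (
            article_id int NOT NULL REFERENCES article (id),
            tag_id     int NOT NULL REFERENCES tag (id),
            PRIMARY KEY (article_id, tag_id)   -- one row per article/tag pair
        );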

    Read the article
