Search Results

Search found 9929 results on 398 pages for 'azure tables'.

Page 355 of 398

  • .net printing multiple reports in one document (architecture question)

    - by LawsonM
    I understand how to print a single document via the PrintDocument class. However, I want to print multiple reports in one document. Each "report" will consist of charts, tables, etc. I want to have two reports per page. I've studied the few examples I can find on how to combine multiple documents into one; however, they always seem to work by creating a collection of objects (e.g. customer or order) that are then iterated over and drawn in the OnPrintPage method. My problem and hence the "architecture" question is that I don't want to cache the objects required to produce the report since they are very large and memory intensive. I'd simply like the resulting "report". One thought I had was to print the report to a metafile, cache that instead in a "MultiplePrintDocument" class and then position those images appropriately two to a page in the OnPrintPage method. I think this would be a lot more efficient and scalable in my case. But I'm not a professional programmer and can't figure out if I'm barking up the wrong tree here. I think the Graphics.BeginContainer and Graphics.Save methods might be relevant, but can't figure out how to implement or if there is a better way. Any pointers would be greatly appreciated.
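    A rough sketch of the metafile idea described above, offered as an assumption rather than a known-good design (MultiplePrintDocument, AddReport and the drawing delegate are invented names): each report is recorded into a Metafile once, and OnPrintPage only positions two cached images per page.

        using System;
        using System.Collections.Generic;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Drawing.Printing;
        using System.IO;

        class MultiplePrintDocument : PrintDocument
        {
            private readonly List<Metafile> _reports = new List<Metafile>();
            private int _next;

            // Record one report into a metafile instead of caching its source data.
            public void AddReport(Action<Graphics> drawReport)
            {
                using (Graphics reference = PrinterSettings.CreateMeasurementGraphics())
                {
                    IntPtr hdc = reference.GetHdc();
                    Metafile metafile = new Metafile(new MemoryStream(), hdc);
                    reference.ReleaseHdc(hdc);
                    using (Graphics g = Graphics.FromImage(metafile))
                        drawReport(g);                       // charts, tables, etc.
                    _reports.Add(metafile);
                }
            }

            protected override void OnPrintPage(PrintPageEventArgs e)
            {
                base.OnPrintPage(e);
                // Two cached report images per page: top half, then bottom half.
                Rectangle slot = new Rectangle(e.MarginBounds.X, e.MarginBounds.Y,
                                               e.MarginBounds.Width, e.MarginBounds.Height / 2);
                for (int i = 0; i < 2 && _next < _reports.Count; i++, _next++)
                {
                    e.Graphics.DrawImage(_reports[_next], slot);
                    slot.Offset(0, e.MarginBounds.Height / 2);
                }
                e.HasMorePages = _next < _reports.Count;
            }
        }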

    Read the article

  • EFv1 mapping 1 to many Relationship to POCOs

    - by Scott
    I'm trying to work through a problem where I'm mapping EF entities to POCOs that serve as DTOs. I have two tables in my database, say Products and Categories. A Product belongs to one Category, and one Category may contain many Products. My EF entities are named efProduct and efCategory, and each entity has the proper navigation property between efProduct and efCategory. My POCO objects are simple:

        public class Product
        {
            public string Name { get; set; }
            public int ID { get; set; }
            public double Price { get; set; }
            public Category ProductType { get; set; }
        }

        public class Category
        {
            public int ID { get; set; }
            public string Name { get; set; }
            public List<Product> products { get; set; }
        }

    To get a list of products I am able to do something like:

        public IQueryable<Product> GetProducts()
        {
            return from p in ctx.Products
                   select new Product
                   {
                       ID = p.ID,
                       Name = p.Name,
                       Price = p.Price,
                       ProductType = p.Category
                   };
        }

    However, there is a type mismatch error because p.Category is of type efCategory. How can I resolve this? That is, how can I convert p.Category to type Category? I know later versions of EF added POCO support, but I'm forced to use .NET 3.5 SP1.
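    One possible resolution, sketched under the assumption that nested object initializers on the unmapped POCOs are acceptable here: project the related efCategory into the POCO Category inside the same query, so no conversion is needed afterwards. Property names are taken from the classes above; everything else is an assumption.

        public IQueryable<Product> GetProducts()
        {
            return from p in ctx.Products
                   select new Product
                   {
                       ID = p.ID,
                       Name = p.Name,
                       Price = p.Price,
                       // Build the POCO Category from the navigation property in-query.
                       ProductType = new Category
                       {
                           ID = p.Category.ID,
                           Name = p.Category.Name
                       }
                   };
        }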

    Read the article

  • Computation overhead in C# - Using getters/setters vs. modifying arrays directly and casting speeds

    - by Jeffrey Kern
    I was going to write a long-winded post, but I'll boil it down here: I'm trying to emulate the graphical old-school style of the NES via XNA. However, my FPS is SLOW when trying to modify 65K pixels per frame. If I just loop through all 65K pixels and set them to some arbitrary color, I get 64 FPS. With the code I wrote to look up which colors should be placed where, I get 1 FPS. I think it is because of my object-oriented code. Right now, I have things divided into about six classes, with getters/setters. I'm guessing that I'm calling at least 360K getters per frame, which I think is a lot of overhead. Each class contains 1D or 2D arrays holding custom enumerations, ints, Colors, Vector2Ds, or bytes. What if I combined all of the classes into just one and accessed the contents of each array directly? The code would look a mess and ditch the concepts of object-oriented coding, but the speed might be much faster. I'm also not concerned about access violations, as any attempts to get/set the data in the arrays will be done in blocks. E.g., all writing to arrays will take place before any data is accessed from them. As for casting, I stated that I'm using custom enumerations, ints, Colors, Vector2Ds, and bytes. Which data types are fastest to use and access in the .NET Framework, XNA, Xbox, and C#? I think that constant casting might be a cause of slowdown here. Also, instead of using math to figure out which indexes data should be placed in, I've used precomputed lookup tables so I don't have to do constant multiplication, addition, subtraction, and division per frame. :)
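    Purely as an illustration of the flat-array idea described above (the names FrameBufferSketch, Pixels and RowOffset are invented, and 256x240 is an assumed resolution): a single byte[] framebuffer addressed through a precomputed row-offset lookup table, with no property calls or casts in the inner loop.

        static class FrameBufferSketch
        {
            const int Width = 256, Height = 240;            // assumed resolution
            static readonly byte[] Pixels = new byte[Width * Height];
            static readonly int[] RowOffset = BuildRowOffsets();

            static int[] BuildRowOffsets()
            {
                var offsets = new int[Height];
                for (int y = 0; y < Height; y++)
                    offsets[y] = y * Width;                  // computed once, not per frame
                return offsets;
            }

            public static void Fill(byte colorIndex)
            {
                for (int y = 0; y < Height; y++)
                {
                    int row = RowOffset[y];
                    for (int x = 0; x < Width; x++)
                        Pixels[row + x] = colorIndex;        // plain array write, no getter/setter
                }
            }
        }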

    Read the article

  • Slow Response in checkbox using JQuery

    - by Dean
    Hi to all. I'm trying to optimize a website. The flow: I query a certain table and display all of its data entries in a page (with toolbars), and that works fine. The problem is when I try to edit the page: when I click a checkbox I have to wait 2-5 seconds just for the click to register. I limited the view to only 5 entries, but the checkbox response time doesn't change. The tables have 100 entries in them.

        function checkAccess(celDiv,id) {
            var celValue = $(celDiv).html();
            if (celValue==1)
                $(celDiv).html("<input type='checkbox' value='"+$(celDiv).html()+"' checked disabled>")
            else
                $(celDiv).html("<input type='checkbox' value='"+$(celDiv).html()+"' disabled>")
            $(celDiv).click (
                function() {
                    $('input',this).each(
                        function(){
                            tr_idx = $('#detFlex1 tbody tr').index($(this).parent().parent().parent());
                            td_idx = $('#detFlex1 tbody tr:eq('+tr_idx+') td').index($(this).parent().parent());
                            td_last = 13;
                            for(var td=td_idx+1; td<=td_last;td++) {
                                if ($(this).attr('checked') == true) {
                                    df[0].rows[tr_idx].cell[td_idx] = 1; //index[1] = Full Access
                                    if (td_idx==3) {
                                        df[0].rows[tr_idx].cell[td] = 1;
                                    }
                                    df[0].rows[tr_idx].cell[2] = 1;
                                    if (td_idx > 3) {
                                        df[0].rows[tr_idx].cell[2] = 1;
                                    }
                                } else {
                                    df[0].rows[tr_idx].cell[td_idx] = 0; //index[0] = With Access
                                    if (td_idx==2) {
                                        df[0].rows[tr_idx].cell[td] = 0;
                                    }
                                    else if (td_idx==3) {
                                        df[0].rows[tr_idx].cell[td] = 0;
                                    }
                                    if (td_idx > 3) {
                                        df[0].rows[tr_idx].cell[3] = 0;
                                    }
                                }
                            }
                            $('#detFlex1').flexAddData(df[0]);
                            $('.toolbar a[title=Edit Item]').trigger('click');
                        });
                }
            );
        }

    I think the problem is the code above. Could anyone help me simplify it?

    Read the article

  • Create table and call it from sql

    - by user1770816
    I have a PL/SQL function which creates a new temporary table. For creating the table I use EXECUTE IMMEDIATE. When I run my function in Oracle SQL Developer everything is OK; the function creates the temp table without errors. But when I use SQL:

        SELECT function_name FROM table_name

    I get exceptions:

        ORA-14552: cannot perform a DDL, commit or rollback inside a query or DML
        ORA-06512: at "SYSTEM.GET_USERS", line 10
        14552. 00000 - "cannot perform a DDL, commit or rollback inside a query or DML "
        *Cause: DDL operations like creation tables, views etc. and transaction control statements such as commit/rollback cannot be performed inside a query or a DML statement.

    Update: Sorry, I'm writing from a tablet PC and have trouble formatting the text. My function:

        CREATE OR REPLACE FUNCTION GET_USERS
        (
            USERID IN VARCHAR2
        ) RETURN VARCHAR2 AS
            request VARCHAR2(520) := 'CREATE GLOBAL TEMPORARY TABLE ';
        BEGIN
            request := request || 'temp_table_' || userid
                    || '(user_name varchar2(50), user_id varchar2(20), is_administrator varchar2(5))'
                    || ' ON COMMIT PRESERVE ROWS';
            EXECUTE IMMEDIATE (request);
            RETURN 'true';
        END GET_USERS;
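    One hedged workaround sketch, not taken from the original post: ORA-14552 is raised because the DDL runs inside a query, so invoking the function from a PL/SQL block instead of a SELECT avoids that restriction. The user id '42' is a placeholder.

        DECLARE
            result VARCHAR2(10);
        BEGIN
            result := GET_USERS('42');        -- placeholder user id
            DBMS_OUTPUT.PUT_LINE(result);
        END;
        /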

    Read the article

  • Multiple left joins, how to output in php

    - by Dan
    I have 3 tables I need to join. The contracts table is the main table; the 'jobs' and 'companies' tables are extra info that can be associated with the contracts table. So, since I want all entries from my 'contracts' table, and the 'jobs' and 'companies' data only if it exists, I wrote the query like this:

        $sql = "SELECT * FROM contracts
                LEFT JOIN jobs ON contracts.job_id = jobs.id
                LEFT JOIN companies ON contracts.company_id = companies.id
                ORDER BY contracts.end_date";

    Now how would I output this in PHP? I tried this but kept getting an undefined index error "Notice: Undefined index: contracts.id"...

        $sql_result = mysql_query($sql,$connection) or die ("Fail.");
        if(mysql_num_rows($sql_result) > 0){
            while($row = mysql_fetch_array($sql_result)) {
                $contract_id = stripslashes($row['contracts.id']);
                $job_number = stripslashes($row['jobs.job_number']);
                $company_name = stripslashes($row['companies.name']);
        ?>
                <tr id="<?=$contract_id?>">
                    <td><?=$job_number?></td>
                    <td><?=$company_name?></td>
                </tr>
        <?
            }
        }else{
            echo "No records found";
        }

    Any help is appreciated.
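    A possible direction, offered as a sketch rather than taken from the post: mysql_fetch_array() keys are plain column names, not table-qualified ones, so aliasing the joined columns makes them addressable as $row['contract_id'], $row['job_number'] and $row['company_name'].

        SELECT contracts.id    AS contract_id,
               jobs.job_number AS job_number,
               companies.name  AS company_name,
               contracts.end_date
        FROM contracts
        LEFT JOIN jobs      ON contracts.job_id = jobs.id
        LEFT JOIN companies ON contracts.company_id = companies.id
        ORDER BY contracts.end_date;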

    Read the article

  • Suggestion on Database structure for relational data

    - by miccet
    Hi there. I've been wrestling with this problem for quite a while now and the automatic mails with 'Slow Query' warnings are still popping in. Basically, I have Blogs with a corresponding table, as well as a table that keeps track of how many times each Blog has been viewed. This last table has a huge amount of records since the page is relatively high traffic and it logs every hit as an individual row. I have tried indexes on the fields that are included in the WHERE clause, but it doesn't seem to help. I have also tried to clean the table each week by removing records older than one week. So, I'm asking you guys, how would you solve this? The query that I know is causing the slowness is generated by Rails and looks like this:

        SELECT count(*) AS count_all FROM blog_views WHERE (created_at >= '2010-01-01 00:00:01' AND blog_id = 1);

    The tables have the following structures:

        CREATE TABLE IF NOT EXISTS 'blogs' (
            'id' int(11) NOT NULL auto_increment,
            'name' varchar(255) default NULL,
            'perma_name' varchar(255) default NULL,
            'author_id' int(11) default NULL,
            'created_at' datetime default NULL,
            'updated_at' datetime default NULL,
            'blog_picture_id' int(11) default NULL,
            'blog_picture2_id' int(11) default NULL,
            'page_id' int(11) default NULL,
            'blog_picture3_id' int(11) default NULL,
            'active' tinyint(1) default '1',
            PRIMARY KEY ('id'),
            KEY 'index_blogs_on_author_id' ('author_id')
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;

    And

        CREATE TABLE IF NOT EXISTS 'blog_views' (
            'id' int(11) NOT NULL auto_increment,
            'blog_id' int(11) default NULL,
            'ip' varchar(255) default NULL,
            'created_at' datetime default NULL,
            'updated_at' datetime default NULL,
            PRIMARY KEY ('id'),
            KEY 'index_blog_views_on_blog_id' ('blog_id'),
            KEY 'created_at' ('created_at')
        ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1 ;
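    A hedged suggestion, not part of the original question: the WHERE clause filters on blog_id equality plus a created_at range, which neither single-column index can serve on its own, so a composite index covering both is one thing worth trying.

        ALTER TABLE blog_views
            ADD INDEX index_blog_views_on_blog_id_and_created_at (blog_id, created_at);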

    Read the article

  • Does this query fetch unnecessary information? Should I change the query?

    - by Camran
    I have this classifieds website, and I have about 7 tables in MySQL where all data is stored. I have one main table, called "classifieds". In the classifieds table, there is a column called classified_id. This is not the PK, or a key whatsoever. It is just a number which is used for me to JOIN table records together. Ex:

        classifieds table:                 fordon table:
        id            => 33                id            => 12
        classified_id => 10                classified_id => 10
        ad_id         => 'bmw_m3_92923'

    The above is linked together by the classified_id column. Now to the Q: I use this method to fetch all records WHERE the column ad_id matches any of the values inside an array, called in this case $ad_arr:

        SELECT mt.*, fordon.*, boende.*, elektronik.*, business.*, hem_inredning.*, hobby.*
        FROM classified mt
        LEFT JOIN fordon ON fordon.classified_id = mt.classified_id
        LEFT JOIN boende ON boende.classified_id = mt.classified_id
        LEFT JOIN elektronik ON elektronik.classified_id = mt.classified_id
        LEFT JOIN business ON business.classified_id = mt.classified_id
        LEFT JOIN hem_inredning ON hem_inredning.classified_id = mt.classified_id
        LEFT JOIN hobby ON hobby.classified_id = mt.classified_id
        WHERE mt.ad_id IN ('$ad_arr')";

    Is this good or would this actually fetch unnecessary information? Check out this Q I posted a couple of days ago. In the comments HLGEM commented that it is wrong, etc. What do you think? http://stackoverflow.com/questions/2782275/another-rookie-question-how-to-implement-count-here Thanks

    Read the article

  • Issue changing innodb_log_file_size

    - by savageguy
    I haven't done much tweaking in the past, so this might be relatively easy, but I am running into issues. This is what I do:

        Stop MySQL
        Edit my.cnf (changing innodb_log_file_size)
        Remove ib_logfile0/1
        Start MySQL

    It starts fine, however all InnoDB tables give the ".frm file is invalid" error, and the status shows the InnoDB engine is disabled, so I obviously go back, remove the change, and everything works again. I have been able to change every other variable I've tried, but I can't figure out why InnoDB fails to start even after removing the log files. Am I missing something? Thanks. Edit: Pasting the log below - it looks like it still finds the log files even though they are not there?

    Shutdown:

        090813 10:00:14 InnoDB: Starting shutdown...
        090813 10:00:17 InnoDB: Shutdown completed; log sequence number 0 739268981
        090813 10:00:17 [Note] /usr/sbin/mysqld: Shutdown complete

    Startup after making the changes:

        InnoDB: Error: log file ./ib_logfile0 is of different size 0 5242880 bytes
        InnoDB: than specified in the .cnf file 0 268435456 bytes!
        090813 11:00:18 [Warning] 'user' entry '[email protected]' ignored in --skip-name-resolve mode.
        090813 11:00:18 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.0.81-community-log' socket: '/var/lib/mysql/mysql.sock' port: 3306 MySQL Community Edition (GPL)
        090813 11:00:19 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './XXXX/User.frm'
        090813 11:00:19 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './XXXX/User.frm'
        090813 11:00:19 [ERROR] /usr/sbin/mysqld: Incorrect information in file: './XXXX/User.frm'

    It's just the same error repeated until I revert the change. When it did start it recreated the log files, so it must be looking in the same spot I am.

    Read the article

  • Image swaping with click on <tr>

    - by Jonny Sooter
    I've edited this piece of code from the DataTables plugin, which makes a tr clickable and pops open another tr with details from the clicked tr. This piece of code is the event listener for opening and closing. When "details" are open the img should be images/details_close.png, and when they are closed the img should be images/details_open.png, so the image should swap each time a row is opened or closed by clicking. What is happening is that no swap takes place when the row is opened or closed; I still only get "details_open.png". I don't know if I'm just not selecting the img tag properly or what's going on. Link to project: http://www2.kent.k12.wa.us/ksd/it/www/mobile/elementary.html

        $('#example tbody').on('click', 'td', function (e) {
            var myImage = $(this).find("img");
            var nTr = $(this).parents('tr')[0];
            if ( oTable.fnIsOpen(nTr) ) {
                /* This row is already open - close it */
                myImage.src = "images/details_open.png";
                oTable.fnClose( nTr );
            }
            else {
                /* Open this row */
                myImage.src = "images/details_close.png";
                oTable.fnOpen( nTr, fnFormatDetails(oTable, nTr), 'details' );
            }
        } );
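    One thing worth checking, offered as a hedged guess rather than a confirmed diagnosis: myImage is a jQuery object, so assigning to its .src property has no effect on the DOM element; the attribute would be set via .attr() (or on the underlying element).

        // Sketch of the jQuery-style assignment (swap the path for the open case):
        myImage.attr('src', 'images/details_close.png');
        // or, equivalently, on the raw DOM element:
        // myImage[0].src = 'images/details_close.png';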

    Read the article

  • Index question: Select * with WHERE clause. Where and how to create index

    - by Mestika
    Hi, I'm working on optimizing some of my queries and I have a query that states:

        "select * from SC where c_id =" + c_id

    The schema of SC looks like this:

        SC (
            c_id int not null,
            date_start date not null,
            date_stop date not null,
            r_t_id int not null,
            nt int,
            t_p decimal,
            PRIMARY KEY (c_id, r_t_id, date_start, date_stop)
        );

    My immediate bid on how the index should be created is a covering index in this order:

        INDEX(c_id, date_start, date_stop, nt, r_t_id, t_p)

    The reasons for this order: the WHERE clause selects on c_id, making it the first column; next, date_start and date_stop specify a sort of "range" defined by these parameters; next, nt because it is selected; next, r_t_id because it is an ID for a specific type in my r_t table; and last t_p, because it is just information. I don't know if it is at all necessary to order the columns in a specific way when it is a SELECT * statement. I should say that SC is not the biggest table. I can't say exactly how many rows it contains, but an estimate would be between <10 and 1000. The next thing to add is that other queries insert data into SC, and I know that indexes on tables with many insertions can be cost-ineffective, but can I somehow find a golden middle way to get good performance? I don't know if it makes a difference, but I'm using an IBM DB2 version 9.7 database. Sincerely, Mestika
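    For what it's worth, the proposed index written out as DB2 DDL (the index name idx_sc_covering is made up; whether it actually beats the existing primary key, which already leads with c_id, would need to be confirmed with EXPLAIN):

        CREATE INDEX idx_sc_covering
            ON SC (c_id, date_start, date_stop, nt, r_t_id, t_p);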

    Read the article

  • Grails one-to-many mapping with joinTable

    - by intargc
    I have two domain classes. One is a "Partner", the other is a "Customer". A customer can be part of a Partner, and a Partner can have one or more Customers:

        class Customer {
            Integer id
            String name
            static hasOne = [partner:Partner]
            static mapping = {
                partner joinTable:[name:'PartnerMap',column:'partner_id',key:'customer_id']
            }
        }

        class Partner {
            Integer id
            static hasMany = [customers:Customer]
            static mapping = {
                customers joinTable:[name:'PartnerMap',column:'customer_id',key:'partner_id']
            }
        }

    However, whenever I try to see if a customer is part of a partner, like this:

        def customers = Customer.list()
        customers.each {
            if (it.partner) {
                println "Partner!"
            }
        }

    I get the following error:

        org.springframework.dao.InvalidDataAccessResourceUsageException: could not execute query; SQL [select this_.customer_id as customer1_162_0_, this_.company as company162_0_, this_.display_name as display3_162_0_, this_.parent_customer_id as parent4_162_0_, this_.partner_id as partner5_162_0_, this_.server_id as server6_162_0_, this_.status as status162_0_, this_.vertical_market as vertical8_162_0_ from Customer this_]; nested exception is org.hibernate.exception.SQLGrammarException: could not execute query

    It looks as if Grails thinks partner_id is part of the Customer query, and it's not... It is in the PartnerMap table, which is supposed to find the customer_id and then get the Partner from the corresponding partner_id. Anyone have a clue what I'm doing wrong? Edit: I forgot to mention I'm doing this with legacy database tables. So I have Partner, Customer and PartnerMap tables. PartnerMap simply has a customer_id and a partner_id field.

    Read the article

  • Run multiple MySQL queries based on a series of ifs

    - by OldWest
    I am just getting started on this complex query I need to write and was hoping for any suggestions or feedback regarding table structure and the actual query itself. I've already created my tables and populated test data, and now I'm just trying to sort out how and what is possible within MySQL. Here is an outline of the problem. End result: a listing of rates based on specific queried criteria (see below):

        Age: [ 27 ]  Spouse Age: [ 25 ]  Num of Children: [ 3 ]  Zip Code: [ 97128 ]

    The problem I am running into is that each company that provides rates has a unique way of dealing with the rate, and I am looking for the best approach for multiple queries based on the company (one query with results for each company, more or less all combined into one result set). Here are some facts:

        - Each company deals with zip code ranges which assist in the query result.
        - Each company has a different method of calculating the rate based on the Applicant, Spouse, and Num of Children. For example:
          a) Company A determines the rate by: Applicant + Spouse + Child(ren) = rate (age is pertinent to the applicant within a range).
          b) Company B determines the rate by total number of applicants: 1, 2, 3, 4, 5, 6+ = rate (and age is ignored).

    First off, what would I call this type of query? A multiple nested query? And should I intertwine PHP within it to determine the if()s... I apologize if this thread lacks sufficient data, so please tell me anything you would like to see.

    Read the article

  • How do I execute queries upon DB connection in Rails?

    - by sycobuny
    I have certain initializing functions that I use to set up audit logging on the DB server side (ie, not rails) in PostgreSQL. At least one has to be issued (setting the current user) before inserting data into or updating any of the audited tables, or else the whole query will fail spectacularly. I can easily call these every time before running any save operation in the code, but DRY makes me think I should have the code repeated in as few places as possible, particularly since this diverges greatly from the ideal of database agnosticism. Currently I'm attempting to override ActiveRecord::Base.establish_connection in an initializer to set it up so that the queries are run as soon as I connect automatically, but it doesn't behave as I expect it to. Here is the code in the initializer: class ActiveRecord::Base # extend the class methods, not the instance methods class << self alias :old_establish_connection :establish_connection # hide the default def establish_connection(*args) ret = old_establish_connection(*args) # call the default # set up necessary session variables for audit logging # call these after calling default, to make sure conn is established 1st db = self.class.connection db.execute("SELECT SV.set('current_user', 'test@localhost')") db.execute("SELECT SV.set('audit_notes', NULL)") # end "empty variable" err ret # return the default's original value end end end puts "Loaded custom establish_connection into ActiveRecord::Base" sycobuny:~/rails$ ruby script/server = Booting WEBrick = Rails 2.3.5 application starting on http://0.0.0.0:3000 Loaded custom establish_connection into ActiveRecord::Base This doesn't give me any errors, and unfortunately I can't check what the method looks like internally (I was using ActiveRecord::Base.method(:establish_connection), but apparently that creates a new Method object each time it's called, which is seemingly worthless cause I can't check object_id for any worthwhile information and I also can't reverse the compilation). However, the code never seems to get called, because any attempt to run a save or an update on a database object fails as I predicted earlier. If this isn't a proper way to execute code immediately on connection to the database, then what is?

    Read the article

  • Django Many-to-Many Question

    - by DZ
    My question seems like a common problem, but whenever I have seen questions on it they are never really asked right or never answered. So I'm going to try to get the question right, and maybe someone knows how to resolve the issue, or can correct my understanding. The problem: when you have a many-to-many relationship (related_name, not through) and you are trying to use the admin interface, you are required to input one of the relationships even though it does not have to exist for you to create the first entry. Meaning you have to assign a group to an event to create the group. Wow, that sounds complicated, so I can see why the question is not getting answered. Let's try the non-code explanation example... First, the versions, which are important: Django 1.1.1, Python 2.6. I have a model where I created a many-to-many relationship and I'm using the related_name. I'm creating an app that is an event organizer (for simplicity let's say events, although they could be any type). For this first post I'm going to stay away from the code and just try to explain. A few keys (** marks the many-to-many side). So in the model we have: 1) The Main Event (this is the main model); 2) Groups (linked to events, and there can be many events for a group): a) Events**. I have simplified this example a little because I recognize the obvious reaction: what does it matter, just create the event first... But there are specific variations where that will not work. What the many-to-many related_name does is create another table with the indices of the two other tables. Nothing says that this extra table HAS to be populated, because if I look in the database and work within phpMyAdmin I can create a group without registering an event; since the connection between the two is a separate table, the DB does not care. How do I make the admin interface realize this? OK, I know that's a lot, so I hope I have explained it clearly. Thank you, anyone, for your comments/thoughts/advice.
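    On the chance that this is the usual cause, a hedged sketch (the model and field names are invented for illustration, not taken from the post): declaring the ManyToManyField with blank=True is what lets the admin save a Group that has no events selected yet.

        # Hypothetical models.py sketch -- names are placeholders.
        from django.db import models

        class Event(models.Model):
            title = models.CharField(max_length=100)

        class Group(models.Model):
            name = models.CharField(max_length=100)
            # blank=True lets the admin form validate with no events selected.
            events = models.ManyToManyField(Event, related_name='groups', blank=True)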

    Read the article

  • Update table using SSIS

    - by thursdaysgeek
    I am trying to update a field in a table with data from another table, based on a common key. If it were in straight SQL, it would be something like: Update EHSIT set e.IDMSObjID = s.IDMSObjID from EHSIT e, EHSIDMS s where e.SITENUM = s.SITE_CODE However, the two tables are not in the same database, so I'm trying to use SSIS to do the update. Oh, and the sitenum/site_code are varchar in one and nvarchar in the other, so I'll have to do a data conversion so they'll match. How do I do it? I have a data flow object, with the source as EHSIDMS and the destination as EHSIT. I have a data conversion to convert the unicode to non-unicode. But how do I update based on the match? I've tried with the destination, using a SQL Command as the Data Access mode, but it doesn't appear to have the source table. If I just map the field to be updated, how does it limit it based on fields matching? I'm about to export my source table to Excel or something, and then try inputting from there, although it seems that all that would get me would be to remove the data conversion step. Shouldn't there be an update data task or something? Is it one of those Data Flow transformation tasks, and I'm just not figuring out which it is?
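    One pattern that is often suggested for this situation, offered here as a sketch rather than the definitive SSIS answer: land the source rows in a staging table on the destination server with the data flow (doing the nvarchar-to-varchar conversion on the way), then run an Execute SQL Task with a joined UPDATE. The staging table name is made up.

        -- Run from an Execute SQL Task after the data flow has filled Staging_EHSIDMS:
        UPDATE e
        SET    e.IDMSObjID = s.IDMSObjID
        FROM   EHSIT e
        JOIN   Staging_EHSIDMS s
               ON e.SITENUM = s.SITE_CODE;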

    Read the article

  • Best way to build an application based on R?

    - by Prasad Chalasani
    I'm looking for suggestions on how to go about building an application that uses R for analytics, table generation, and plotting. What I have in mind is an application that:

        - displays various data tables in different tabs, somewhat like in Excel, with columns sortable by clicking;
        - takes user input parameters in some dialog windows;
        - displays plots dynamically (i.e. user-input-dependent) either in a tab or in a new pop-up window/frame.

    Note that I am not talking about a general-purpose front-end/GUI for exploring data with R (like, say, Rattle), but a specific application. Some questions I'd like to see addressed are: Is an entirely R-based approach even possible (on Windows)? The following passage from the Rattle article in the R Journal intrigues me: "It is interesting to note that the first implementation of Rattle actually used Python for implementing the callbacks and R for the statistics, using rpy. The release of RGtk2 allowed the interface elements of Rattle to be written directly in R so that Rattle is a fully R-based application." If it's better to use another language for the GUI part, which language is best suited for this? I'm looking for a language where it's relatively "painless" to build the GUI and that also integrates very well with R. From the StackOverflow question "How should I do rapid GUI development for R and Octave methods (possibly with Python)?" I see that Python + PyQt4 + QtDesigner + RPy2 seems to be the best combo. Is that the consensus? Anyone have pointers to specific (open source) applications of the type I describe, as examples that I can learn from?

    Read the article

  • Best way to update/insert into a table based on a remote table.

    - by martilyo
    I have two very large enterprise tables in an Oracle 10g database. One table keeps the historical information of the other table. The problem is, I'm getting to the point where there are so many records that my insert/update is taking too long and my session is getting killed by the governor. Here's pseudocode of my update process:

        sqlsel := 'SELECT col1, col2, col3, sysdate
                   FROM table2@remote_location dpi
                   WHERE (col1, col2, col3) IN
                   (
                       SELECT col1, col2, col3
                       FROM table2@remote_location
                       MINUS
                       SELECT DISTINCT col1, col2, col3
                       FROM table1 mpc
                       WHERE facility = '''||load_facility||'''
                   )';

        EXECUTE IMMEDIATE sqlsel BULK COLLECT INTO table1;

    I've tried the MERGE statement:

        MERGE INTO table1 t1
        USING
        (
            SELECT col1, col2, col3 FROM table2@remote_location
        ) t2
        ON
        (
            t1.col1 = t2.col1 AND t1.col2 = t2.col2 AND t1.col3 = t2.col3
        )
        WHEN NOT MATCHED THEN
            INSERT (t1.col1, t1.col2, t1.col3, t1.update_dttm )
            VALUES (t2.col1, t2.col2, t2.col3, sysdate )

    But there seems to be a confirmed bug in versions prior to Oracle 10.2.0.4 in the MERGE statement when merging against a remote database. The chance of getting an enterprise upgrade is slim, so is there a way to further optimize my first query, or to write it in another way so it performs better? Thanks.
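    One workaround sometimes suggested for the remote-MERGE limitation, sketched here under the assumption that a local staging table is acceptable (the name table2_stage is invented): pull the remote rows across first, then MERGE against the local copy.

        INSERT INTO table2_stage (col1, col2, col3)
            SELECT col1, col2, col3 FROM table2@remote_location;

        MERGE INTO table1 t1
        USING table2_stage t2
        ON (t1.col1 = t2.col1 AND t1.col2 = t2.col2 AND t1.col3 = t2.col3)
        WHEN NOT MATCHED THEN
            INSERT (t1.col1, t1.col2, t1.col3, t1.update_dttm)
            VALUES (t2.col1, t2.col2, t2.col3, SYSDATE);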

    Read the article

  • Linq. Help me tune this!

    - by dtrick
    I have a linq query that is causing some timeout issues. Basically, I have a query that is returning the top 100 results from a table that has approximately 500,000 records. Here is the query: using (var dc = CreateContext()) { var accounts = string.IsNullOrEmpty(searchText) ? dc.Genealogy_Accounts .Where(a => a.Genealogy_AccountClass.Searchable) .OrderByDescending(a => a.ID) .Take(100) : dc.Genealogy_Accounts .Where(a => (a.Code.StartsWith(searchText) || a.Name.StartsWith(searchText)) && a.Genealogy_AccountClass.Searchable) .OrderBy(a => a.Code) .Take(100); return accounts.Select(a => } } Oddly enough it is the first linq query that is causing the timeout. I thought that by doing a 'Take' we wouldn't need to scan all 500k of records. However, that must be what is happening. I'm guessing that the join to find what is 'searchable' is causing the issue. I'm not able to denormalize the tables... so I'm wondering if there is a way to rewrite the linq query to get it to return quicker... or if I should just write this query as a Stored Procedure (and if so, what might it look like). Thanks.

    Read the article

  • SELECT product from subclass: How many queries do I need?

    - by Stefano
    I am building a database similar to the one described here, where I have products of different type, each type with its own attributes. I report a short version for convenience:

        product_type
        ============
        product_type_id   INT
        product_type_name VARCHAR

        product
        =======
        product_id      INT
        product_name    VARCHAR
        product_type_id INT -> Foreign key to product_type.product_type_id
        ... (common attributes to all product)

        magazine
        ========
        magazine_id INT
        title       VARCHAR
        product_id  INT -> Foreign key to product.product_id
        ... (magazine-specific attributes)

        web_site
        ========
        web_site_id INT
        name        VARCHAR
        product_id  INT -> Foreign key to product.product_id
        ... (web-site specific attributes)

    This way I do not need to make a huge table with a column for each attribute of different product types (most of which will then be NULL). How do I SELECT a product by product.product_id and see all its attributes? Do I have to make a query first to know what type of product I am dealing with and then, through some logic, make another query to JOIN the right tables? Or is there a way to join everything together? (If, when I retrieve the information about a product_id, there are a lot of NULL, it would be fine at this point.) Thank you
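    One way to read the "join everything together" option, sketched with the tables above (the literal 42 is a placeholder id): left-join every subtype table on product_id and accept NULLs for the subtypes that don't apply, which matches the tolerance for NULLs stated at the end.

        SELECT p.*, m.*, w.*
        FROM product p
        LEFT JOIN magazine m ON m.product_id = p.product_id
        LEFT JOIN web_site w ON w.product_id = p.product_id
        WHERE p.product_id = 42;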

    Read the article

  • Matching on search attributes selected by customer on front end

    - by CodeNinja1974
    I have a method in a class that returns results based on a certain set of customer-specified criteria. The method matches what the customer specifies on the front end against each item in a collection that comes from the database. In cases where the customer does not specify one of the attributes, the ID of that attribute is passed into the method as 0 (the database has an identity on all tables that is seeded at 1 and is incremental). In that case the attribute should be ignored; for example, if the customer does not specify the Location, then customerSearchCriteria.LocationID = 0 coming into the method. The matching should then match on the other attributes and return everything that matches them, example below:

        public IEnumerable<Pet> FindPetsMatchingCustomerCriteria(CustomerPetSearchCriteria customerSearchCriteria)
        {
            if(customerSearchCriteria.LocationID == 0)
            {
                return repository.GetAllPetsLinkedCriteria()
                    .Where(x => x.TypeID == customerSearchCriteria.TypeID &&
                                x.FeedingMethodID == customerSearchCriteria.FeedingMethodID &&
                                x.FlyAblityID == customerSearchCriteria.FlyAblityID )
                    .Select(y => y.Pet);
            }
        }

    The code for when all criteria are specified is shown below:

        private PetsRepository repository = new PetsRepository();

        public IEnumerable<Pet> FindPetsMatchingCustomerCriteria(CustomerPetSearchCriteria customerSearchCriteria)
        {
            return repository.GetAllPetsLinkedCriteria()
                .Where(x => x.TypeID == customerSearchCriteria.TypeID &&
                            x.FeedingMethodID == customerSearchCriteria.FeedingMethodID &&
                            x.FlyAblityID == customerSearchCriteria.FlyAblityID &&
                            x.LocationID == customerSearchCriteria.LocationID )
                .Select(y => y.Pet);
        }

    I want to avoid a whole set of if and else statements to cater for each case where the customer does not explicitly select an attribute of the results they are looking for. What is the most succinct and efficient way in which I could achieve this?
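    A common shape for this, sketched as a suggestion rather than the definitive answer: fold the "ignore when 0" rule into each predicate so a single query covers every combination of specified and unspecified criteria.

        // Hedged sketch: each clause passes automatically when its criterion is 0.
        public IEnumerable<Pet> FindPetsMatchingCustomerCriteria(CustomerPetSearchCriteria c)
        {
            return repository.GetAllPetsLinkedCriteria()
                .Where(x => (c.TypeID == 0          || x.TypeID == c.TypeID) &&
                            (c.FeedingMethodID == 0 || x.FeedingMethodID == c.FeedingMethodID) &&
                            (c.FlyAblityID == 0     || x.FlyAblityID == c.FlyAblityID) &&
                            (c.LocationID == 0      || x.LocationID == c.LocationID))
                .Select(y => y.Pet);
        }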

    Read the article

  • EF 4.0 Guid or Int as A primary Key

    - by bigb
    I am implementing custom ASP.NET Membership using EF 4.0. Is there any reason why I should use a Guid as the primary key in the User tables? As far as I know, an int as a PK on SQL Server performs better than strings, and an int is easier to iterate. Also, for security purposes, if I need to pass this id somewhere in a URL I can encrypt it somehow and pass it as a string with no problems. But if I want to use an auto-generated Guid on the SQL Server side using EF 4.0 I need to do this trick: http://leedumond.com/blog/using-a-guid-as-an-entitykey-in-entity-framework-4/ I can't see any case where I should use a Guid as the PK, except maybe if the system is going to have millions and millions of users; but also, theoretically, couldn't a Guid be duplicated at some point? Anyway, the Int32 range is 2,147,483,647, which is pretty much enough even for a very, very big system, and if that number is still not enough I can go with Int64, in which case I can have 9,223,372,036,854,775,807 rows. Pretty much, huh? On the other hand, M$ uses Guids as the PK in their ASP.NET Membership implementation: [aspnetdb].[aspnet_Users] - PK UserId of type uniqueidentifier. There should be some reason/explanation why they did it?! Maybe someone has ideas/experience about this?

    Read the article

  • Best way to limit results in MySQL with user subcategories

    - by JM4
    I am trying to essentially solve for the following: 1) Find all users in the system who ONLY have programID 1. 2) Find all users in the system who have programID 1 AND any other active program. My table structures (in very simple terms) are as follows:

        users
        userID | Name
        ================
        1      | John Smith
        2      | Lewis Black
        3      | Mickey Mantle
        4      | Babe Ruth
        5      | Tommy Bahama

        plans
        ID | userID | plan | status
        ---------------------------
        1  | 1      | 1    | 1
        2  | 1      | 2    | 1
        3  | 1      | 3    | 1
        4  | 2      | 1    | 1
        5  | 2      | 3    | 1
        6  | 3      | 1    | 0
        7  | 3      | 2    | 1
        8  | 3      | 3    | 1
        9  | 3      | 4    | 1
        10 | 4      | 2    | 1
        11 | 4      | 4    | 1
        12 | 5      | 1    | 1

    I know I can easily find all members with a specific plan with something like the following:

        SELECT *
        FROM users a
        JOIN plans b ON (a.userID = b.userID)
        WHERE b.plan = 1 AND b.status = 1

    but this will only tell me which users have an 'active' plan 1. How can I tell who ONLY has plan 1 (in this case only userID 5), and how can I tell who has plan 1 AND any other active plan? Update: This is not to get a count; I will actually need the original member information, including all the plans they have, so a COUNT(*) response may not be what I'm trying to achieve.
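    One hedged shape for requirement 1 (users who only have an active plan 1), keeping the full member rows as the update asks; requirement 2 would follow by flipping NOT EXISTS to EXISTS. This is a sketch against the sample tables above, not a tested answer.

        SELECT a.*
        FROM users a
        JOIN plans b ON a.userID = b.userID AND b.plan = 1 AND b.status = 1
        WHERE NOT EXISTS (
            SELECT 1 FROM plans x
            WHERE x.userID = a.userID
              AND x.status = 1
              AND x.plan <> 1
        );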

    Read the article

  • Java: over-typed structures?

    - by HH
    By the term "over-typed structure" I mean a data structure that accepts different types, primitive or user-defined. I think Ruby supports many types in structures such as tables. I tried a table with types 'String', 'char' and 'File' in Java, but it errors. How can I have an over-typed structure in Java? How do I show the types in the declaration? What about in initialization? Suppose a structure:

        INDEX  VAR      FILETYPE
        //0 -> file     FILE
        //1 -> lineMap  SizeSequence
        //2 -> type     char
        //3 -> binary   boolean
        //4 -> name     String
        //5 -> path     String

    Code:

        import java.io.*;
        import java.util.*;

        public class Object {
            public static void print(char a) { System.out.println(a); }
            public static void print(String s) { System.out.println(s); }

            public static void main(String[] args) {
                Object[] d = new Object[6];
                d[0] = new File(".");
                d[2] = 'T';
                d[4] = ".";
                print(d[2]);
                print(d[4]);
            }
        }

    Errors:

        Object.java:18: incompatible types
        found   : java.io.File
        required: Object
                d[0] = new File(".");
                       ^
        Object.java:19: incompatible types
        found   : char
        required: Object
                d[2] = 'T';
                       ^
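    For what it's worth, a small hedged sketch of one way such a heterogeneous array can look in Java: use java.lang.Object[] under a class name other than Object (the class above shadows java.lang.Object, which is why the assignments are rejected), and cast or use instanceof when reading elements back out. The class name OverTyped is invented.

        import java.io.File;

        public class OverTyped {
            public static void main(String[] args) {
                Object[] d = new Object[6];     // java.lang.Object holds any type
                d[0] = new File(".");
                d[2] = 'T';                     // char is auto-boxed to Character
                d[4] = ".";

                // Reading back requires a cast (or an instanceof check first).
                System.out.println((Character) d[2]);
                System.out.println((String) d[4]);
            }
        }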

    Read the article

  • Merging ILists to bind on datagridview to avoid using a database view

    - by P.Bjorklund
    In the form we have this, where IntaktsBudgetsType is a poorly named enum that only specifies whether to populate the datagridview by customer or by product (you do the budgeting either per product or per customer):

        private void UpdateGridView()
        {
            bs = new BindingSource();
            bs.DataSource = intaktsbudget.GetDataSource(this.comboBoxKundID.Text, IntaktsBudgetsType.PerKund);
            dataGridViewIntaktPerKund.DataSource = bs;
        }

    This populates the datagridview with a database view that merges the product, budget and customer tables. The logic has the following method to get the correct set of IList from the repository, which only does GetTable<T>.ToList<T>:

        public IEnumerable<IntaktsBudgetView> GetDataSource(string id, IntaktsBudgetsType type)
        {
            IList<IntaktsBudgetView> list = repository.SelectTable<IntaktsBudgetView>();
            switch (type)
            {
                case IntaktsBudgetsType.PerKund:
                    return from i in list where i.kundId == id select i;
                case IntaktsBudgetsType.PerProdukt:
                    return from i in list where i.produktId == id select i;
            }
            return null;
        }

    Now I don't want to use a database view, since that is read-only and I want to be able to perform CRUD actions on the datagridview. I could build a class that acts as a wrapper for the whole thing and bind the different table values to class properties, but that doesn't seem quite right since I would have to do this for every single thing that requires "the merge". Something pretty important (and probably basic) is missing in the thought process, but after spending a weekend on Google and in books I give up and turn to the SO community.

    Read the article
