Search Results

Search found 5233 results on 210 pages for 'records'.

Page 145/210 | < Previous Page | 141 142 143 144 145 146 147 148 149 150 151 152  | Next Page >

  • SQL Server 2008 Insert performance issue

    - by mithiya
    Is there any way to increase the performance of SQL Server inserts? As you can see below, I have tested SQL Server 2005, SQL Server 2008 and Oracle. I am moving data from Oracle to SQL Server, and while inserting into SQL Server I am using a stored procedure. Inserts into Oracle are very fast compared to SQL Server. Is there any way to increase the performance, or a better way to move the data from Oracle to SQL Server (roughly 100,000 records an hour)? Please find below the stats I gathered; the RUN1 and RUN2 times are in milliseconds.
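
    A minimal sketch of one common way to speed this up, assuming a linked server to Oracle is available (the linked server name ORACLE_LINK, the target table and the column lists are placeholders, not taken from the question): replacing row-by-row procedure calls with a single set-based INSERT ... SELECT usually removes most of the per-row overhead.

        -- Set-based copy through a linked server (all names are assumptions)
        INSERT INTO dbo.TargetTable (col1, col2, col3)
        SELECT col1, col2, col3
        FROM OPENQUERY(ORACLE_LINK, 'SELECT col1, col2, col3 FROM source_table');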

    Read the article

  • Export to Excel taking a long time from ASP pages?

    - by ricky
    I am using the following code to export to Excel from an .ASP page:

        GMID = Request.QueryString ("GMID")
        Response.Buffer = False
        Response.ContentType = "application/vnd.ms-excel"
        DIR_YR = Request.QueryString ("DIR_YR")
        CD = Request.QueryString("CD")
        YEAR = Request.QueryString("IND")

    The problem I am facing is that when there are around 2,000 records or more, the export prompts with the Open option; when I click it, only "Download in progress..." is shown, but no Excel window actually opens. How can I fix this bug? For 700-800 rows it works fine. I am not looking to change the whole code, because the problem only affects one sales rep who has more than 2,000 rows; I am looking for a one- or two-line change.

    Read the article

  • Left Join works with table but fails with query

    - by Frank Martin
    The following LEFT JOIN query in MS Access 2007:

        SELECT Table1.Field_A, Table1.Field_B,
               qry_Table2_Combined.Field_A, qry_Table2_Combined.Field_B,
               qry_Table2_Combined.Combined_Field
        FROM Table1
        LEFT JOIN qry_Table2_Combined
            ON (Table1.Field_A = qry_Table2_Combined.Field_A)
           AND (Table1.Field_B = qry_Table2_Combined.Field_B);

    is expected (by me) to return this result:

        +---------+---------+---------+---------+----------------+
        | Field_A | Field_B | Field_A | Field_B | Combined_Field |
        +---------+---------+---------+---------+----------------+
        | 1       |         |         |         |                |
        | 1       |         |         |         |                |
        | 2       | 1       | 2       | 1       | John, Doe      |
        | 2       | 2       |         |         |                |
        +---------+---------+---------+---------+----------------+

    [Table1] has 4 records, [qry_Table2_Combined] has 1 record. But it gives me this:

        +---------+---------+---------+---------+----------------+
        | Field_A | Field_B | Field_A | Field_B | Combined_Field |
        +---------+---------+---------+---------+----------------+
        | 2       | 1       | 2       | 1       | John, Doe      |
        | 2       | 2       | 2       |         | ,              |
        +---------+---------+---------+---------+----------------+

    What is really weird is that [Combined_Field] has a comma in the second row; I use a comma to concatenate two fields in [qry_Table2_Combined]. If the LEFT JOIN query uses a table created from the query [qry_Table2_Combined], it works as expected. Why does this LEFT JOIN not give the same result for a query as for a table, and how can I get the right results while using a query in the LEFT JOIN?
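
    A minimal sketch of one workaround, building on the observation in the question that the join works against a table: materialize [qry_Table2_Combined] into a temporary table first and join against that (the name tmp_Table2_Combined is a placeholder).

        SELECT qry_Table2_Combined.* INTO tmp_Table2_Combined
        FROM qry_Table2_Combined;

        SELECT t1.Field_A, t1.Field_B, t2.Field_A, t2.Field_B, t2.Combined_Field
        FROM Table1 AS t1
        LEFT JOIN tmp_Table2_Combined AS t2
            ON (t1.Field_A = t2.Field_A) AND (t1.Field_B = t2.Field_B);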

    Read the article

  • Access Validation Rule Violations on Append Query

    - by Jacques Tardie
    I'm receiving the following error when trying to run an append query in Access: "Microsoft Office Access set .... and it didn't add... 779280 record(s) due to validation rule violations." If I choose to run the query anyway, nothing actually happens. To give some context, I'm simply trying to copy a populated field, containing values similar to "16-2009-02", "34-2010-02", et cetera, to another, currently unpopulated field. The fields themselves have no validation rules set, and both use the standard text field options. I'm hoping to be able to simply remove those hyphens and fix the issue, but I guess that's what I'm not sure about: are those hyphens actually a problem? Running Access 2003 with SP3. Thanks in advance!
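
    If removing the hyphens does turn out to help, a minimal sketch of an Access update query that strips them while copying (MyTable, SourceField and TargetField are placeholders, not names from the question); note that Replace() only works when the query is run from inside Access itself.

        UPDATE MyTable
        SET TargetField = Replace([SourceField], "-", "");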

    Read the article

  • FieldError when annotating over foreign keys

    - by X_9
    I have a models file that looks similar to the following:

        class WithDate(models.Model):
            adddedDate = models.DateTimeField(auto_now_add=True)
            modifiedDate = models.DateTimeField(auto_now=True)

            class Meta:
                abstract = True

        class Match(WithDate):
            ...

        class Notify(WithDate):
            matchId = models.ForeignKey(Match)
            headline = models.CharField(null=True, blank=True, max_length=10)

    For each Match I'm trying to get a count of notify records that have a headline. So my call looks like:

        matchObjs = Match.objects.annotate(notifies_made=Count('notify__headline__isnull'))

    This keeps throwing a FieldError. I've simplified the query down to:

        matchObjs = Match.objects.annotate(notifies_made=Count('notify'))

    And I still get the same FieldError... I've seen this work in other cases (other documentation, other SO questions like this one) but I can't figure out why I'm getting an error. The specific error that is returned is as follows:

        Cannot resolve keyword 'notify' into field. Choices are: (all fields from Match model)

    Does anyone have a clue as to why I can't get this annotation to work across tables? I'm baffled after looking at the other SO question and various Django docs where I've seen this done.

    Edit: I am using Django 1.1.1

    Read the article

  • Build SUM based daily record

    - by ximarin
    I have a problem building an aggregate query. Here's my problem: I have a table like this:

        id  action  day         isSum  difference
        --  ------  ----------  -----  ----------
        1   ping    2012-01-01  1      500   (the sum of the differences from last year)
        2   ping    2012-01-01  0       -2
        3   ping    2012-01-02  0        1
        4   ping    2012-01-03  0       -4
        5   ping    2012-01-04  0       -2
        6   ping    2012-01-05  0        3
        7   ping    2012-01-06  0        2
        8   ping    2012-01-01  1        0   (the sum of the differences from last year, now for pong)
        9   pong    2012-01-01  0       -5
        10  pong    2012-01-02  0        2
        11  pong    2012-01-03  0       -2
        12  pong    2012-01-04  0       -8
        13  pong    2012-01-05  0        3
        14  pong    2012-01-06  0        4

    I now need to select the action, the day, and the summed difference since 2012-01-01 for every day, so that my result looks like this:

        action  day         total
        ------  ----------  -----
        ping    2012-01-01    498
        ping    2012-01-02    499
        ping    2012-01-03    495
        ping    2012-01-04    493
        ping    2012-01-05    496
        ping    2012-01-06    498
        pong    2012-01-01     -5
        pong    2012-01-02     -3
        pong    2012-01-03     -5
        pong    2012-01-04    -13
        pong    2012-01-05    -10
        pong    2012-01-06     -6

    How can I do this? There are a lot of rows (~1 million), so the query needs to be pretty cheap. I don't know how to use SUM to get a running daily total per action for these daily records.
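
    A minimal sketch of one way to express this with a window function, assuming the table is called ping_pong and the database supports SUM() OVER (e.g. SQL Server 2012+, Oracle or PostgreSQL); the inner SUM collapses each day, the outer one accumulates the running total per action.

        SELECT action,
               day,
               SUM(SUM(difference)) OVER (PARTITION BY action
                                          ORDER BY day) AS total
        FROM ping_pong
        GROUP BY action, day
        ORDER BY action, day;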

    Read the article

  • Scala case class generated field value

    - by Petteri Hietavirta
    I have an existing Scala application that uses case classes which are then persisted in MongoDB. I need to introduce a new field to a case class whose value is derived from an existing field. For example, there is a phone number, and I want to add a normalised phone number while keeping the original one. I'll update the existing records in MongoDB, but I would need to add this normalisation to the existing save and update code. So, is there any nice shortcut in Scala to add a "hook" to a certain field of a case class? In Java, for example, one could modify the setter of the phone number.

    Read the article

  • How to use GROUP BY for grouping varchar data?

    - by Shantanu Gupta
    I have a table that contains the data given below:

        pk_map_id  preferences  ImmediateParent  Department_Id
        ---------  -----------  ---------------  -------------
        20         14           5                1
        21         15           5                1
        22         16           6                1
        23         9            4                2
        24         4            3                2
        25         24           20               2
        26         25           20               2
        27         23           13               2

    I want to group my records by department, then by immediate parent, then by preferences, with each list separated by ',', i.e.:

        department  ImmediateParent  preferences
        ----------  ---------------  -------------
        1           5,6              14,15,16
        2           4,3,20,13        9,4,24,25,23

    and also this table:

        ImmediateParent  preferences
        ---------------  -----------
        5                14,15
        6                16
        4                9
        3                4
        20               24,25
        13               13

    In the actual scenario all of these are IDs which are to be replaced by their string fields. I am using SQL Server 2005.
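
    A minimal sketch of the usual SQL Server 2005 approach, using correlated subqueries with FOR XML PATH to build the comma-separated lists (the table name MapTable is a placeholder, and the list order may differ from the sample output); the per-ImmediateParent table follows the same pattern with ImmediateParent in the outer GROUP BY.

        SELECT m.Department_Id AS department,
               STUFF((SELECT ',' + CAST(m2.ImmediateParent AS varchar(20))
                      FROM MapTable m2
                      WHERE m2.Department_Id = m.Department_Id
                      GROUP BY m2.ImmediateParent
                      FOR XML PATH('')), 1, 1, '') AS ImmediateParent,
               STUFF((SELECT ',' + CAST(m3.preferences AS varchar(20))
                      FROM MapTable m3
                      WHERE m3.Department_Id = m.Department_Id
                      FOR XML PATH('')), 1, 1, '') AS preferences
        FROM MapTable m
        GROUP BY m.Department_Id;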

    Read the article

  • What's the proper way to use sqlite on the iPhone?

    - by Elliot Chen
    Hi, experts: can you please give some suggestions on using SQLite on the iPhone? Within my application I use a SQLite DB to store all local data. Two methods can be used to retrieve that data at run time:

    1. Load all the data into memory at the initialization stage (more memory used, fewer DB open/close operations needed).
    2. Read the corresponding records when necessary and free the occupied memory after use (better memory behaviour, but many DB open/close operations).

    I prefer method 2, but I am not sure whether too many DB open/close operations could affect the app's efficiency. Or do you think I can 'upgrade' method 2 by opening the DB when the app launches and closing it when the app quits? Thanks very much for your suggestions!

    Read the article

  • Overriding unique indexed values

    - by Yeti
    This is what I'm doing right now (name is UNIQUE):

        SELECT * FROM fruits WHERE name='apple';

    I check whether the query returned any result. If yes, I don't do anything. If no, a new value has to be inserted:

        INSERT INTO fruits (name) VALUES ('apple');

    Instead of the above, is it OK to insert the value into the table without checking whether it already exists? If the name already exists in the table an error will be thrown, and if it doesn't, a new record will be inserted. Right now I have to insert 500 records in a for loop, which results in 1,000 queries. Will it be OK to skip the "already exists" check?
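
    A minimal sketch of the insert-without-checking variant, assuming MySQL (the question does not name the database engine); both forms rely on the UNIQUE index on name and avoid the extra SELECT per row.

        -- Silently skip rows whose name already exists
        INSERT IGNORE INTO fruits (name) VALUES ('apple');

        -- Or avoid the error while leaving the existing row effectively unchanged
        INSERT INTO fruits (name) VALUES ('apple')
        ON DUPLICATE KEY UPDATE name = name;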

    Read the article

  • Linq query: append column to query results

    - by jrubengb
    I am trying to figure out how to append a column to LINQ query results based on the maximum value in the query. Essentially, I want to create an EnumerableRowCollection of DataRows that includes a max-value column holding the same value for every record. So if I have a hundred records returned by the query, I next want to calculate the max value of one of the fields and then append that max value to the original query table:

        DataTable dt = new DataTable();
        dt = myDataSet.myDataTable;
        EnumerableRowCollection<DataRow> qrySelectRecords =
            (from d in dt.AsEnumerable()
             where d.Field<DateTime>("readingDate") >= startDate
                && g.Field<DateTime>("readingDate") <= endDate
             select d);

    Here's where I need help:

        double maxValue = qrySelectRecords.Field<double>("field1").Max();
        foreach (DataRow dr in qrySelectRecords)
        {
            qrySelectRecords.Column.Append(maxValue)
        }

    Read the article

  • MS SQL query solution / suggestion required

    - by Nirmal
    Hello all... I have the following scenario in my MS SQL 2005 database. The "zipcode" table has the following fields and values (just a sample):

        zipcode  latitude  longitude
        -------  --------  ---------
        65201    123.456   456.789
        65203    126.546   444.444

    and the "place" table has the following fields and values:

        id  name  zip    latitude  longitude
        --  ----  -----  --------  ---------
        1   abc   65201  NULL      NULL
        2   def   65202  NULL      NULL
        3   ghi   65203  NULL      NULL
        4   jkl   65204  NULL      NULL

    Now, my requirement is to compare the zip codes in the "place" table and fill in the available latitude and longitude fields from the "zipcode" table. Some of the zip codes have no entry in the "zipcode" table, and those should remain NULL. The major issue is that I have more than 5,000,000 records in my database, so the query needs to cope with that. I have tried some solutions but unfortunately did not get the proper output. Any help would be appreciated...
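
    A minimal sketch of an UPDATE ... FROM join, using the table and column names given in the question (adjust if the real names differ); rows with no match in the zipcode table are not touched, so their latitude and longitude stay NULL. Indexes on zipcode.zipcode and place.zip help at this row count.

        UPDATE p
        SET p.latitude  = z.latitude,
            p.longitude = z.longitude
        FROM place p
        INNER JOIN zipcode z
            ON z.zipcode = p.zip;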

    Read the article

  • Ruby: would using Fibers increase my DB insert throughput?

    - by Zombies
    Currently I am using Ruby 1.9.1 and the 'ruby-mysql' gem, which, unlike the 'mysql' gem, is written in Ruby only. This is pretty slow, actually, as it seems to insert at a rate of almost one row per second (SLOOOOOWWWWWW). And I have a lot of inserts to make too; that is pretty much all this script ultimately does. I am using just one connection (since I am using just one thread). I am hoping to speed things up by creating a fiber that will:

    - create a new DB connection,
    - insert 1-3 records,
    - close the DB connection.

    I would imagine launching 20-50 of these would greatly increase DB throughput. Am I correct to go down this route? I feel that this is the best option, as opposed to refactoring all of my DB code :(

    Read the article

  • Oracle (PL/SQL): Is UPDATE RETURNING concurrent?

    - by Jaap
    I'm using a table with a counter to ensure unique IDs on a child element. I know it is usually better to use a sequence, but I can't use one because I have a lot of counters (a customer can create a couple of buckets and each of them needs its own counter; they have to start with 1, as my customer needs "human readable" keys). I'm creating records (let's call them items) whose primary key is (bucket_id, num = counter). I need to guarantee that the bucket_id / num combination is unique (so using a sequence as the primary key won't fix my problem). The creation of the rows doesn't happen in PL/SQL, so I need to claim the number first (by the way, it's not against the requirements to have gaps). My solution was:

        UPDATE bucket
        SET counter = counter + 1
        WHERE id = param_id
        RETURNING counter INTO num_forprikey;

    PL/SQL returns var_num_forprikey so the item record can be created. Question: will I always get a unique num_forprikey even if users concurrently ask for new items in a bucket?

    Read the article

  • ASP.NET MVC: View does not get rendered with updated model value

    - by Newton
    I'm experiencing a challenge in populating the child records. My previous code was like this:

        <%= Html.TextBox("DyeOrder.Summary[" + i + "].Ratio", Model.DyeOrder.Summary[i].Ratio.ToString("#0.00"), ratioProperties) %>

    This code does not render the updated values after the post back. To resolve the issue, my workaround was this:

        <%= "<input id='DyeOrder_Summary_" + i + "__Ratio' name='DyeOrder.Summary[" + i + "].Ratio' value='" + Model.DyeOrder.Summary[i].Ratio.ToString("#0.00") + "' " + ratioCss + " type='text' />"%>

    This is very clumsy to me. Are there any better ideas?

    Read the article

  • two php arrays - sort one array with the value order of another

    - by Tisch
    Hi there, I have two PHP arrays like so:

    1. Array of X records containing the ID of Wordpress posts (in a particular order)
    2. Array of Wordpress posts

    The two arrays look something like this:

    Array One (Sorted Custom Array of Wordpress Post IDs):

        Array
        (
            [0] => 54
            [1] => 10
            [2] => 4
        )

    Array Two (Wordpress Post Array):

        Array
        (
            [0] => stdClass Object ( [ID] => 4  [post_author] => 1 )
            [1] => stdClass Object ( [ID] => 54 [post_author] => 1 )
            [2] => stdClass Object ( [ID] => 10 [post_author] => 1 )
        )

    I would like to sort the array of wordpress posts with the order of the ID's in the first array. I hope this makes sense, and thanks in advance of any help. Tom

    edit: The server is running PHP Version 5.2.14

    Read the article

  • Huge page buffer vs. multiple simultaneous processes

    - by Andrei K.
    One of our customers has a 35 GB database with an average active connection count of about 70-80. Some tables in the database have more than 10M records per table. They have now bought a new server: 4 x 6-core = 24 CPU cores, 48 GB RAM, 2 RAID controllers with a 256 MB cache and 8 SAS 15K HDDs each, and a 64-bit OS. I'm wondering which would be the faster configuration:

    1. Firebird 2.5 SuperServer with a huge buffer: 8,192 bytes x 3,500,000 pages = 29 GB, or
    2. Firebird 2.5 Classic with a small buffer of 1,000 pages.

    Maybe someone has tested such a case before and will save me days of work :) Thanks in advance.

    Read the article

  • SSRS Column Grouping with specific order

    - by AmiT
    Hi experts, is it possible to change the order of records/groups in a result set from a query that uses GROUP BY? I have a query:

        SELECT Category, Subcategory, ProductName, CreatedDate, Sales
        FROM TableCategory tc
        INNER JOIN TableSubCategory ts ON tc.col1 = ts.col2
        INNER JOIN TableProductName tp ON ts.col2 = tp.col3
        GROUP BY Category, SubCategory, ProductName, CreatedDate, Sales

    Now I am creating an SSRS report where Category is the primary row group, SubCategory is its child row group, and ProductName is the primary column group. It works perfectly, but it shows the product names in alphabetical order. I want it to show the product names in a custom order (defined by me), like ProductNo5 in the 3rd column, ProductNo8 in the 4th column, ProductNo1 in the 5th column... and so on!
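
    A minimal sketch of one common approach: return an explicit sort key from the query and sort the ProductName column group on that field in SSRS instead of on the name itself (the column name ProductSortOrder is a placeholder; the CASE branches stand in for the full custom order).

        SELECT Category, Subcategory, ProductName, CreatedDate, Sales,
               CASE ProductName
                   WHEN 'ProductNo5' THEN 1
                   WHEN 'ProductNo8' THEN 2
                   WHEN 'ProductNo1' THEN 3
                   ELSE 99
               END AS ProductSortOrder
        FROM TableCategory tc
        INNER JOIN TableSubCategory ts ON tc.col1 = ts.col2
        INNER JOIN TableProductName tp ON ts.col2 = tp.col3
        GROUP BY Category, SubCategory, ProductName, CreatedDate, Sales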

    Read the article

  • A linq join combined with a regex

    - by Geert Beckx
    Is it possible to combine these two queries, or would that make my code too complex? I also think there should be a performance gain from combining them, since in the near future my source table could hold over 11,000 records. This is what I came up with so far:

        Dim lit As LiteralControl

        ' check characters not in alphabet
        Dim r As New Regex("^[^a-zA-Z]+")
        Dim query = From o In source.ToTable _
                    Where r.IsMatch(o.Field(Of String)("nam"))
        lit = New LiteralControl(String.Format("letter: {0}, count: {1}<br />", "0-9", query.Count))
        plhAlpabetLinks.Controls.Add(lit)

        Dim q = From l In "ABCDEFGHIJKLMNOPQRSTUVWXYZ".ToLower.ToCharArray _
                Group Join o In source.ToTable _
                On l Equals o.Field(Of String)("nam").ToLowerInvariant(0) Into g = Group _
                Select l, g.Count

        ' iterate the alphabet to generate all the links.
        For Each letter In q.AsEnumerable
            lit = New LiteralControl(String.Format("letter: {0}, count: {1}<br />", letter.l, letter.Count))
            plhAlpabetLinks.Controls.Add(lit)
        Next

    Kind regards, G.

    Read the article

  • Filtering results and pagination

    - by alj
    I have a template that shows a filter form and, below it, a list of the result records. I bind the form to the request so that the filter form sets itself to the options the user submitted when the results are returned. I also use pagination. Using the code in the pagination documentation means that when the user clicks through to the next page, the form data is lost. What is the best way of dealing with pagination and filtering together?

    1. Pass the query string to the pagination links.
    2. Change the pagination links to form buttons, and thereby submit the filter form at the same time; but this assumes that the user hasn't messed about with the filter options.
    3. As above, but with the original data as hidden fields.

    ALJ

    Read the article

  • Need help with a nested loop of queries in PHP and MySQL?

    - by mysqllearner
    Hi, I am trying to do this:

        <?php
        $good_customer = 0;

        $q = mysql_query("SELECT user FROM users WHERE activated = '1'"); // this gives me about 40k users

        while($r = mysql_fetch_assoc($q)){
            $money_spent = 0;
            $user = $r['user'];

            // Do queries on another 20 tables
            for($i = 1; $i <= 20; $i++){
                $tbl_name = 'data' . $i;
                $q2 = mysql_query("SELECT money_spent FROM $tbl_name WHERE user = '{$user}'");
                while($r2 = mysql_fetch_assoc($q2)){
                    $money_spend += $r2['money_spent'];
                }
                if($money_spend > 1000000){
                    $good_customer += 1;
                }
            }
        }

    This is just an example. I am testing on localhost; for a single user it returns very fast, but when I try 1,000 users it takes forever, not to mention 40k users. Is there any way to optimise/improve this code? EDIT: by the way, each of the other 20 tables has ~20-40k records.
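
    A minimal sketch of pushing the whole computation into MySQL instead of looping in PHP, using the table and column names from the question (only the first two of the twenty data tables are written out; the remaining branches follow the same pattern).

        SELECT COUNT(*) AS good_customers
        FROM (
            SELECT u.`user`
            FROM users u
            JOIN (
                SELECT `user`, money_spent FROM data1
                UNION ALL
                SELECT `user`, money_spent FROM data2
                -- repeat the UNION ALL branch for data3 through data20
            ) d ON d.`user` = u.`user`
            WHERE u.activated = '1'
            GROUP BY u.`user`
            HAVING SUM(d.money_spent) > 1000000
        ) t;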

    Read the article

  • How to use Zend_Cache identifiers?

    - by ArneRie
    Hi folks, I think I'm going crazy. I'm trying to implement Zend_Cache to cache my database queries. I know how it works and how to configure it, but I can't find a good way to set the identifier for the cache entries. I have a method which searches for records in my database (based on an array of search values):

        /**
         * Find Record(s)
         * Returns one record, or array with objects
         *
         * @param array $search Search columns => value
         * @param integer $limit Limit results
         * @return array One record, or array with objects
         */
        public function find(array $search, $limit = null)
        {
            $identifier = 'NoIdea';
            if (!($data = $this->_cache->load($identifier))) {
                // fetch
                // save to cache with $identifier..
            }
        }

    But what kind of identifier can I use in this situation?

    Read the article

  • SQL Server / T-SQL: How to update equal percentages of a resultset?

    - by Kent Comeaux
    I need a way to take a resultset of KeyIDs and divide it up as equally as possible and update records differently for each division based on the KeyIDs. In other words, there is:

        SELECT KeyID FROM TableA WHERE (some criteria exists)

    I want to update TableA 3 different ways by 3 equal portions of KeyIDs:

        UPDATE TableA SET FieldA = Value1
        WHERE KeyID IN (the first 1/3 of the SELECT resultset above)

        UPDATE TableA SET FieldA = Value2
        WHERE KeyID IN (the second 1/3 of the SELECT resultset above)

        UPDATE TableA SET FieldA = Value3
        WHERE KeyID IN (the third 1/3 of the SELECT resultset above)

    or something to that effect. Thanks for any and all of your responses.
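
    A minimal sketch using NTILE to split the result set into three roughly equal portions (available from SQL Server 2005 onwards); the WHERE filter, the ordering column and the Value1/Value2/Value3 literals are placeholders standing in for the ones in the question.

        ;WITH Portions AS (
            SELECT KeyID,
                   NTILE(3) OVER (ORDER BY KeyID) AS Portion
            FROM TableA
            WHERE SomeCriteria = 1   -- same filter as the original SELECT
        )
        UPDATE a
        SET a.FieldA = CASE p.Portion
                           WHEN 1 THEN 'Value1'
                           WHEN 2 THEN 'Value2'
                           ELSE 'Value3'
                       END
        FROM TableA a
        INNER JOIN Portions p ON p.KeyID = a.KeyID;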

    Read the article

  • SQL Server Delete - Foreign Key

    - by Ahmet Altun
    I have the following tables in SQL Server 2005:

    - USER table: information about the user and so on.
    - COUNTRY table: holds the list of all countries in the world.
    - USER_COUNTRY table: records which user has visited which country; it holds UserID and CountryID.

    For example, the USER_COUNTRY table looks like this:

        ID  UserID  CountryID
        --  ------  ---------
        1   1       34
        2   1       5
        3   2       17
        4   2       12
        5   2       21
        6   3       19

    My question is: when a user is deleted from the USER table, how can I make the associated records in the USER_COUNTRY table be deleted automatically, maybe by using a foreign key constraint?
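
    A minimal sketch of a cascading foreign key, assuming the USER table's primary key column is called ID (the constraint name is a placeholder); deleting a row from USER then removes the matching USER_COUNTRY rows automatically.

        ALTER TABLE USER_COUNTRY
        ADD CONSTRAINT FK_UserCountry_User
            FOREIGN KEY (UserID) REFERENCES [USER](ID)
            ON DELETE CASCADE;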

    Read the article

  • How to block adding the same record to an SPList?

    - by truthseeker
    Hi, is there a way to block the possibility of adding the same data to an SPList? I know that two records are always different with regard to the ID field; I would like to validate the other custom fields I added previously and prevent adding an item with the same field values. Can anybody tell me how to implement this? I can guess that event receivers could be the answer, but I couldn't find out how to add a receiver to an SPList. Can anybody tell me if I'm right, and what the step-by-step procedure is to add such an event receiver? I would like to know how to build it and install it using a Feature file. Best regards, T.S.

    Read the article
