Search Results

Search found 14841 results on 594 pages for 'performance monitoring'.

Page 529/594

  • Very interesting problem in Compact Framework

    - by Alexander
    Hi, I have a performance problem while inserting data into SQL CE. I'm reading a string and making inserts into my tables. Into the LU_MAM table I insert 1,000 records within 8 seconds. After the MAM tables I make some more inserts, but my largest table is CR_MUS. When I want to insert records into CR_MUS, it takes too much time: CR_MUS has 2,000 records and the insert takes 35 seconds. What can the reason be? I use the same logic in my insert functions. Do you have any idea? I use VS 2008 SP1.

        Dim reader As StringReader
        reader = New StringReader(data)
        cn = New SqlCeConnection(General.ConnString)
        cn.Open()
        If myTransfer.ClearTables(cn, cmd) = True Then
            progress = 0
            '------------------------------------------
            cmd = New SqlServerCe.SqlCeCommand
            Dim rs As SqlCeResultSet
            cmd.Connection = cn
            cmd.CommandType = CommandType.TableDirect
            Dim rec As SqlCeUpdatableRecord ' name of table
            While reader.Peek > -1
                If strerr_col = "" Then
                    satir = reader.ReadLine()
                    ayrac = Split(satir, "|")
                    If ayrac(0).ToString() = "LC" Then
                        prgsbar.Maximum = Convert.ToInt32(ayrac(1))
                    ElseIf ayrac(0).ToString = "PPAR" Then
                        If ayrac(2).ToString <> General.PMVer Then
                            ShowWaitCursor(False)
                            txtDurum.Text = "Wrong Version"
                            Exit Sub
                        End If
                        If p_POCKET_PARAMETERS = True Then
                            cmd.CommandText = "POCKET_PARAMETERS"
                            txtDurum.Text = "POCKET_PARAMETERS"
                            rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable)
                            rec = rs.CreateRecord()
                            p_POCKET_PARAMETERS = False
                        End If
                        strerr_col = myVERI_AL.POCKET_PARAMETERS_I(ayrac, cmd, rs, rec)
                        prgsbar.Value += 1
                    ElseIf ayrac(0).ToString() = "MAM" Then
                        If p_LU_MAM = True Then
                            txtDurum.Text = "LU_MAM "
                            cmd.CommandText = "LU_MAM"
                            rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable)
                            rec = rs.CreateRecord()
                            p_LU_MAM = False
                        End If
                        strerr_col = myVERI_AL.LU_MAM_I(ayrac, cmd, rs, rec)
                        prgsbar.Value += 1
                    ElseIf ayrac(0).ToString = "KMUS" Then
                        If p_CR_MUS = True Then
                            cmd.CommandText = "CR_MUS"
                            txtDurum.Text = "CR_MUS"
                            rs = cmd.ExecuteResultSet(ResultSetOptions.Updatable)
                            rec = rs.CreateRecord()
                            p_TR_KAMPANYA_MALZEME = False ' note: this resets a different flag, not p_CR_MUS
                        End If
                        strerr_col = myVERI_AL.CR_MUS_I(ayrac, cmd, rs, rec)
                        prgsbar.Value += 1
                    End If
                End If
            End While
        End If

        Public Function CR_KAMPANYA_MUSTERI_I(ByVal f_Line() As String, ByRef myComm As SqlCeCommand, ByRef rs As SqlCeResultSet, ByRef rec As SqlCeUpdatableRecord) As String
            Try
                rec.SetValue(0, If(f_Line(1) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(1)))
                rec.SetValue(1, If(f_Line(2) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(2)))
                rec.SetValue(2, If(f_Line(3) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(3)))
                rec.SetValue(3, If(f_Line(5) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(5)))
                rec.SetValue(4, If(f_Line(6) = String.Empty, DirectCast(DBNull.Value, Object), f_Line(6)))
                rs.Insert(rec)
            Catch ex As Exception
                strerr_col = ex.Message
            End Try
            Return strerr_col
        End Function
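
    One thing worth checking, sketched below under the assumption that per-row commit overhead is the bottleneck: SQL CE flushes after every Insert unless the work runs inside a transaction, so wrapping the whole load in one SqlCeTransaction can change the picture considerably. Names follow the question's code; this is untested.

        ' Hedged sketch: same TableDirect / SqlCeResultSet pattern, but inside
        ' one transaction so the engine does not flush after every rs.Insert.
        Dim tx As SqlCeTransaction = cn.BeginTransaction()
        Try
            cmd.Transaction = tx
            ' ... the existing While loop with the rs.Insert(rec) calls ...
            tx.Commit()
        Catch
            tx.Rollback()
            Throw
        End Try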

  • SQL to get list of dates as well as days before and after without duplicates

    - by Nathan Koop
    I need to display a list of dates, which I have in a table:

        SELECT mydate AS MyDate, 1 AS DateType
        FROM myTable
        WHERE myTable.fkId = @MyFkId;

        Jan 1, 2010 - 1
        Jan 2, 2010 - 1
        Jan 10, 2010 - 1

    No problem. However, I now need to display the date before and the date after as well, with a different DateType:

        Dec 31, 2009 - 2
        Jan 1, 2010 - 1
        Jan 2, 2010 - 1
        Jan 3, 2010 - 2
        Jan 9, 2010 - 2
        Jan 10, 2010 - 1
        Jan 11, 2010 - 2

    I thought I could use a union:

        SELECT MyDate, DateType
        FROM (
            SELECT mydate - 1 AS MyDate, 2 AS DateType
            FROM myTable
            WHERE myTable.fkId = @MyFkId
            UNION
            SELECT mydate + 1 AS MyDate, 2 AS DateType
            FROM myTable
            WHERE myTable.fkId = @MyFkId
            UNION
            SELECT mydate AS MyDate, 1 AS DateType
            FROM myTable
            WHERE myTable.fkId = @MyFkId
        ) AS myCombinedDateTable

    This, however, includes duplicates of the original dates:

        Dec 31, 2009 - 2
        Jan 1, 2010 - 2
        Jan 1, 2010 - 1
        Jan 2, 2010 - 2
        Jan 2, 2010 - 1
        Jan 3, 2010 - 2
        Jan 9, 2010 - 2
        Jan 10, 2010 - 1
        Jan 11, 2010 - 2

    How can I best remove these duplicates? I am considering a temporary table, but am unsure whether that is the best way to do it. It also looks like this may cause performance issues, as I am running the same query three separate times. What would be the best way to handle this request?
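
    One possible approach, sketched here, is to group the combined set by date and keep the lowest DateType, so a date that appears both as an original (1) and as a neighbour (2) collapses to the original. UNION ALL is enough inside, since the GROUP BY does the de-duplication:

        SELECT MyDate, MIN(DateType) AS DateType
        FROM (
            SELECT mydate - 1 AS MyDate, 2 AS DateType
            FROM myTable WHERE myTable.fkId = @MyFkId
            UNION ALL
            SELECT mydate + 1, 2
            FROM myTable WHERE myTable.fkId = @MyFkId
            UNION ALL
            SELECT mydate, 1
            FROM myTable WHERE myTable.fkId = @MyFkId
        ) AS myCombinedDateTable
        GROUP BY MyDate
        ORDER BY MyDate;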

  • 301 Redirecting URLs based on GET variables in .htaccess

    - by technicalbloke
    I have a few messy old URLs like...

        http://www.example.com/bunch.of/unneeded/crap?opendocument&part=1
        http://www.example.com/bunch.of/unneeded/crap?opendocument&part=2

    ...that I want to redirect to the newer, cleaner form...

        http://www.example.com/page.php/welcome
        http://www.example.com/page.php/prices

    I understand I can redirect one page to another with a simple redirect, i.e.:

        Redirect 301 /bunch.of/unneeded/crap http://www.example.com/page.php

    But the source page doesn't change, only its GET vars. I can't figure out how to base the redirect on the value of these GET variables. Can anybody help, please? I'm fairly handy with the old regexes, so I can have a pop at using mod_rewrite if I have to, but I'm not clear on the syntax for rewriting GET vars, and I'd prefer to avoid the performance hit and use the cleaner Redirect directive. Is there a way? And if not, can anyone clue me in as to the right mod_rewrite syntax, please? Cheers, Roger.
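
    For what it's worth, mod_alias's Redirect never sees the query string, so matching on it does need mod_rewrite. A hedged .htaccess sketch based only on the example URLs above (the trailing ? on the target discards the old query string):

        RewriteEngine On

        # ?opendocument&part=1  ->  /page.php/welcome
        RewriteCond %{QUERY_STRING} (^|&)part=1(&|$) [NC]
        RewriteRule ^bunch\.of/unneeded/crap$ http://www.example.com/page.php/welcome? [R=301,L]

        # ?opendocument&part=2  ->  /page.php/prices
        RewriteCond %{QUERY_STRING} (^|&)part=2(&|$) [NC]
        RewriteRule ^bunch\.of/unneeded/crap$ http://www.example.com/page.php/prices? [R=301,L]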

  • Asp.Net tree view in SharePoint webpart- Input string error

    - by Faiz
    Hi all, I am facing a very strange issue. I have a SharePoint web part that displays an ASP.NET tree view. It takes the tree depth from a drop-down. To improve performance of the tree view, I set the PopulateOnDemand property to true for the last level of the tree depth. For example, if I have a total of 10 levels in the data and the user selects a tree depth of 3, then at the third level I set PopulateOnDemand to true. Now comes the strange part. When I click on the + image on the third level and there are children under that particular node, the callback happens and the node gets expanded. But if there are no children for that particular node, clicking + throws an "Input string was not in the correct format" error. I have made sure that there is no server-side error. Something looks fishy when Internet Explorer tries to construct the expanded node. Please let me know if anyone has faced a similar issue, or knows a resolution. Thanks in advance.

  • Commit is VERY slow in my NHibernate / SQLite project

    - by Tom Bushell
    I've just started doing some real-world performance testing on my Fluent NHibernate / SQLite project, and am experiencing some serious delays when I Commit to the database. By serious, I mean taking 20-30 seconds to Commit 30 K of data! The delay seems to get worse as the database grows. When the SQLite DB file is empty, commits happen almost instantly, but when it grows to 10 MB, I see these huge delays. The database has 16 tables, averaging 10 columns each. One possible problem is that I'm storing a dozen or so IList<float> members, but they are typically only 200 elements long. Support for these is a recent addition to Fluent NHibernate automapping, which stores each float in its own table row, so maybe that's a potential problem. Any suggestions on how to track this down? I suspect SQLite is the culprit, but maybe it's NHibernate? I don't have any experience with profilers, but am thinking of getting one. I'm aware of NHibernate Profiler - any recommendations for profilers that work well with SQLite? Here's the method that saves the data - it's just a SaveOrUpdate call and a Commit, if you ignore all the error handling and debug logging.

        public static void SaveMeasurement(object measurement)
        {
            Debug.WriteLine("\r\n---SaveMeasurement---");

            // Get the application's database session
            var session = GetSession();

            using (var transaction = session.BeginTransaction())
            {
                try
                {
                    session.SaveOrUpdate(measurement);
                }
                catch (Exception e)
                {
                    throw new ApplicationException(
                        "\r\n SaveMeasurement->SaveOrUpdate failed\r\n\r\n", e);
                }

                try
                {
                    Debug.WriteLine("\r\n---Commit---");
                    transaction.Commit();
                    Debug.WriteLine("\r\n---Commit Complete---");
                }
                catch (Exception e)
                {
                    throw new ApplicationException(
                        "\r\n SaveMeasurement->Commit failed\r\n\r\n", e);
                }
            }
        }

  • Question about creating device-compatible bitmaps in C#

    - by MusiGenesis
    I am storing bitmap-like data in a two-dimensional int array. To convert this array into a GDI-compatible bitmap (for use with BitBlt), I am using this function:

        public IntPtr GetGDIBitmap(int[,] data)
        {
            int w = data.GetLength(0);
            int h = data.GetLength(1);

            IntPtr ret = IntPtr.Zero;

            using (Bitmap bmp = new Bitmap(w, h))
            {
                for (int x = 0; x < w; x++)
                {
                    for (int y = 0; y < h; y++)
                    {
                        Color color = Color.FromArgb(data[x, y]);
                        bmp.SetPixel(x, y, color);
                    }
                }
                ret = bmp.GetHbitmap();
            }

            return ret;
        }

    This works as expected, but the call to bmp.GetHbitmap() has to allocate memory for the returned bitmap. I'd like to modify this method in two (probably related) ways:

    1. I'd like to remove the intermediate Bitmap from the above code entirely, and go directly from my int[,] array to the device-compatible bitmap (i.e. the IntPtr). I presume this would involve calling CreateCompatibleBitmap, but I don't know how to go from that call to actually manipulating the pixel values.
    2. This should logically follow from the answer to the first, but I'd also like my method to re-use existing GDI bitmap handles (instead of creating a new bitmap each time). How can I do this?

    NOTE: I don't really use Bitmap.SetPixel(), as its performance could best be described as "glacial". The code is just for illustration.
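
    Not the CreateCompatibleBitmap route the question asks about, but as a point of comparison, a hedged sketch that at least removes the per-pixel SetPixel cost by copying whole rows through LockBits before handing back the HBITMAP (pixel format and row order are assumptions):

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Runtime.InteropServices;

        public static IntPtr GetGDIBitmapFast(int[,] data)
        {
            int w = data.GetLength(0);
            int h = data.GetLength(1);

            using (Bitmap bmp = new Bitmap(w, h, PixelFormat.Format32bppArgb))
            {
                BitmapData bd = bmp.LockBits(new Rectangle(0, 0, w, h),
                    ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);

                int[] row = new int[w];
                for (int y = 0; y < h; y++)
                {
                    for (int x = 0; x < w; x++)
                        row[x] = data[x, y];          // same packed ARGB ints as Color.FromArgb
                    Marshal.Copy(row, 0,
                        new IntPtr(bd.Scan0.ToInt64() + (long)y * bd.Stride), w);
                }
                bmp.UnlockBits(bd);

                return bmp.GetHbitmap();              // still allocates the GDI copy
            }
        }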

  • Using variables inside macros in SQL

    - by Tim
    Hello, I want to use variables inside my macro SQL on Teradata. I thought I could do something like the following:

        REPLACE MACRO DbName.MyMacro
        (
            MacroNm VARCHAR(50)
        )
        AS
        (
            /* Variable to store last time the macro was run */
            DECLARE V_LAST_RUN_DATE TIMESTAMP;

            /* Get last run date and store in V_LAST_RUN_DATE */
            SELECT LastDate
            INTO V_LAST_RUN_DATE
            FROM DbName.RunLog
            WHERE MacroNm = :MacroNm;

            /* Update the last run date to now and save the old date in history */
            EXECUTE MACRO DbName.RunLogUpdater
            (
                :MacroNm,
                V_LAST_RUN_DATE,
                CURRENT_TIMESTAMP
            );
        );

    However, that didn't work, so I thought of this instead:

        REPLACE MACRO DbName.MyMacro
        (
            MacroNm VARCHAR(50)
        )
        AS
        (
            /* Variable to store last time the macro was run */
            CREATE VOLATILE TABLE MacroVars AS
            (
                SELECT LastDate AS V_LAST_RUN_DATE
                FROM DbName.RunLog
                WHERE MacroNm = :MacroNm
            )
            WITH DATA ON COMMIT PRESERVE ROWS;

            /* Update the last run date to now and save the old date in history */
            EXECUTE MACRO DbName.RunLogUpdater
            (
                :MacroNm,
                SELECT V_LAST_RUN_DATE FROM MacroVars,
                CURRENT_TIMESTAMP
            );
        );

    I can do what I'm looking for with a stored procedure, but I want to avoid that for performance reasons. Do you have any ideas about this? Is there anything else I can try? Cheers, Tim

  • PHP autoloader: ignoring non-existing include

    - by Bart van Heukelom
    I have a problem with my autoloader:

        public function loadClass($className)
        {
            $file = str_replace(array('_', '\\'), '/', $className) . '.php';
            include_once $file;
        }

    As you can see, it's quite simple. I just deduce the filename of the class and try to include it. I have a problem, though: I get an exception when trying to load a non-existing class (because I have an error handler which throws exceptions). This is inconvenient, because it also fires when you use class_exists() on a non-existing class. You don't want an exception there, just a "false" returned. I fixed this earlier by putting an @ before the include (suppressing all errors). The big drawback, though, is that any parser/compiler errors (which are fatal) in the include won't show up (not even in the logs), resulting in a hard-to-find bug. What would be the best way to solve both problems at once? The easiest way would be to include something like this in the autoloader (pseudocode):

        foreach (path in the include_path) {
            if (is_readable(the path + the class name))
                readable = true;
        }
        if (!readable)
            return;

    But I worry about the performance there. Would it hurt a lot? (Solved) Made it like this:

        public function loadClass($className)
        {
            $file = str_replace(array('_', '\\'), '/', $className) . '.php';
            $paths = explode(PATH_SEPARATOR, get_include_path());
            foreach ($paths as $path) {
                if (is_readable($path . '/' . $file)) {
                    include_once $file;
                    return;
                }
            }
        }

  • Is it safe to reuse javax.xml.ws.Service objects

    - by Noel Ang
    I have a JAX-WS-style web service client that was auto-generated with the NetBeans IDE. The generated proxy factory (which extends javax.xml.ws.Service) delegates proxy creation to the various Service.getPort methods. The application that I am maintaining instantiates the factory and obtains a proxy each time it calls the targeted service. Creating new proxy factory instances repeatedly has been shown to be expensive, given that the WSDL document supplied to the factory constructor, an HTTP URI, is re-retrieved for each instantiation. We had success in improving the performance by caching the WSDL, but this has ugly maintenance and packaging implications for us. I would like to explore the suitability of caching the proxy factory itself. Is it safe? E.g., can two different client classes, executing on the same JVM and targeting the same web service, safely use the same factory to obtain distinct proxy objects (or a shared, reentrant one)? I've been unable to find guidance in either the JAX-WS specification or the javax.xml.ws API documentation. The factory-proxy multiplicity is unclear to me. Having Service.getPort rather than Service.createPort does not inspire confidence.
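
    Since the spec is silent, the following is only a sketch of the caching under consideration, not a guarantee of its safety: one lazily created Service per JVM, so the WSDL is fetched once, with ports still obtained per call. The QName and URL are placeholders:

        import java.net.URL;
        import javax.xml.namespace.QName;
        import javax.xml.ws.Service;

        public final class CachedServiceFactory {

            // Placeholders: use the namespace/name from the generated factory.
            private static final QName SERVICE_QNAME =
                new QName("http://example.com/ws", "SomeService");

            private static volatile Service cached;

            public static Service getService(URL wsdlLocation) {
                Service s = cached;
                if (s == null) {
                    synchronized (CachedServiceFactory.class) {
                        if (cached == null) {
                            // The WSDL is retrieved once, on first use
                            cached = Service.create(wsdlLocation, SERVICE_QNAME);
                        }
                        s = cached;
                    }
                }
                return s;
            }
        }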

  • Can a large transaction log cause cpu hikes to occur

    - by Simon Rigby
    Hello all, I have a client with a very large database on SQL Server 2005. The total space allocated to the db is 15 GB, with roughly 5 GB to the db and 10 GB to the transaction log. Just recently a web application that connects to that db has been timing out. I have traced the actions on the web page and examined the queries that execute whilst these web operations are performed. There is nothing untoward in the execution plan. The query itself uses multiple joins but completes very quickly. However, the db server's CPU spikes to 100% for a few seconds. The issue occurs when several simultaneous users are working on the system (when I say several, read: about 5). Under this load, timeouts start to occur. I suppose my question is: can a large transaction log cause issues with CPU performance? There is about 12 GB of free space on the disk currently. The configuration is a little out of my hands, but the db and log are both on the same physical disk. I appreciate that the log file is massive and needs attending to, but I'm just looking for a heads up as to whether this may cause CPU spikes (i.e. trying to find the correlation). The timeouts are a recent thing, and this app has been responsive for a few years (i.e. it's a recent manifestation). Many thanks.

  • Why use bin2hex when inserting binary data from PHP into MySQL?

    - by Atli
    I heard a rumor that when inserting binary data (files and such) into MySQL, you should use the bin2hex() function and send it as a hex-coded value, rather than just using mysql_real_escape_string on the binary string:

        // What you supposedly should do
        $hex = bin2hex($raw_bin);
        $sql = "INSERT INTO `table`(`file`) VALUES (X'{$hex}')";

        // Rather than
        $bin = mysql_real_escape_string($raw_bin);
        $sql = "INSERT INTO `table`(`file`) VALUES ('{$bin}')";

    It is supposedly for performance reasons - something to do with how MySQL handles large strings vs. how it handles hex-coded values. However, I am having a hard time confirming this. All my tests indicate the exact opposite: the bin2hex method is ~85% slower and uses ~24% more memory. (I am testing this on PHP 5.3, MySQL 5.1, Win7 x64, using a fairly simple insert loop.) For instance, I graphed the private memory usage of the mysqld process while the test code was running. Does anybody have any explanations or resources that would clarify this? Thanks.
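
    To make "fairly simple insert loop" concrete, here is a minimal sketch of that kind of comparison, using the same mysql_* API as the question (connection setup, table schema, and iteration counts are all assumptions):

        <?php
        // Build ~64 KB of pseudo-random binary data
        $raw = '';
        for ($i = 0; $i < 65536; $i++) {
            $raw .= chr(mt_rand(0, 255));
        }

        // Hex-literal inserts
        $start = microtime(true);
        for ($i = 0; $i < 100; $i++) {
            $hex = bin2hex($raw);
            mysql_query("INSERT INTO `table`(`file`) VALUES (X'{$hex}')");
        }
        $hexTime = microtime(true) - $start;

        // Escaped-string inserts
        $start = microtime(true);
        for ($i = 0; $i < 100; $i++) {
            $bin = mysql_real_escape_string($raw);
            mysql_query("INSERT INTO `table`(`file`) VALUES ('{$bin}')");
        }
        $escTime = microtime(true) - $start;

        printf("hex: %.3fs  escaped: %.3fs\n", $hexTime, $escTime);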

  • Caching page by parts; how to pass variables calculated in cached parts into never-cached parts?

    - by Kirzilla
    Hello, let's imagine that I have code like this...

        if (!$data = $cache->load("part1_cache_id")) {
            $item_id = $model->getItemId();
            ob_start();
            echo 'Here is the cached item id: '.$item_id;
            $data = ob_get_contents();
            ob_end_clean();
            $cache->save($data, "part1_cache_id");
        }
        echo $data;

        echo never_cache_function($item_id);

        if (!$data_2 = $cache->load("part2_cache_id")) {
            ob_start();
            echo 'Here is another cached part of the page...';
            $data_2 = ob_get_contents();
            ob_end_clean();
            $cache->save($data_2, "part2_cache_id");
        }
        echo $data_2;

    As you can see, I need to pass the $item_id variable into never_cache_function, but if the first part is cached ("part1_cache_id") then I have no way to get $item_id's value. The only solution I see is to serialize all the data from the first part (including $item_id's value), cache the serialized string, and unserialize it every time the script is executed. Something like this...

        if (!$data = $cache->load("part1_cache_id")) {
            $item_id = $model->getItemId();
            $data['item_id'] = $item_id;
            ob_start();
            echo 'Here is the cached item id: '.$item_id;
            $data['html'] = ob_get_contents();
            ob_end_clean();
            $cache->save( serialize($data), "part1_cache_id" );
        }
        $data = unserialize($data);
        echo $data['html'];
        echo never_cache_function($data['item_id']);

    Is there any other way of doing such a trick? I'm looking for the most high-performance solution. Thank you.

    UPDATED: Another question is - how do I implement such caching in a controller without separating the page into two templates? Is it possible? PS: Please do not suggest Smarty; I'm really interested in implementing custom caching.

  • Finding a picture in a picture with java?

    - by tarrasch
    What I want to do is analyse input from the screen in the form of pictures. I want to be able to identify a part of an image within a bigger image and get its coordinates within the bigger picture. Example: a small picture (in the original post, a green frame with a white interior) would have to be located inside a bigger screenshot, and the result would be the upper-right corner of the part in the big picture and its lower left. As you can see, the white part of the picture is irrelevant; what I basically need is just the green frame. Is there a library that can do something like this for me? Runtime is not really an issue. What I want to do with this is just generate a few random pixel coordinates and check the colour of the big picture at those positions, to recognize the green box quickly later. And how much would performance decrease if the white box in the middle were transparent? The question has been asked several times on SO, it seems, without a single answer. I found a solution at http://werner.yellowcouch.org/Papers/subimg/index.html. Unfortunately it's in C++ and I don't understand a thing. It would be nice to have a Java implementation on SO.
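
    A library isn't strictly needed here; since runtime isn't a concern, a brute-force scan in plain Java works. A sketch (exact pixel equality assumed - the transparent-middle variant would need a per-pixel "ignore" test instead):

        import java.awt.image.BufferedImage;

        public final class SubImageFinder {

            /** Returns {x, y} of the top-left corner of the match, or null. */
            public static int[] find(BufferedImage big, BufferedImage small) {
                for (int x = 0; x <= big.getWidth() - small.getWidth(); x++) {
                    for (int y = 0; y <= big.getHeight() - small.getHeight(); y++) {
                        if (matchesAt(big, small, x, y)) {
                            return new int[] { x, y };
                        }
                    }
                }
                return null;
            }

            private static boolean matchesAt(BufferedImage big, BufferedImage small,
                                             int ox, int oy) {
                for (int x = 0; x < small.getWidth(); x++) {
                    for (int y = 0; y < small.getHeight(); y++) {
                        // Compare packed ARGB values pixel by pixel
                        if (big.getRGB(ox + x, oy + y) != small.getRGB(x, y)) {
                            return false;
                        }
                    }
                }
                return true;
            }
        }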

  • Model login constraints based on time

    - by DaDaDom
    Good morning, for an existing web application I need to implement "time-based login constraints". That means that for each user, and later maybe each group, I can define time slots when they are (not) allowed to log into the system. As all data for the application is stored in database tables, I need to model this idea there somehow. My first approach, which I will try to explain here:

    - Create a tree of login constraints (called "timeslots") with the main "categories", like "workday", "weekend", "public holiday", etc. on the top level, in a "sorted" order (meaning "public holiday" has a higher priority than "workday").
    - For each top-level node, create subnodes with a finer timespan, like "monday", "tuesday", ...
    - Below that, create an "hour" level: 0, 1, 2, ..., 23. No further detail is necessary.
    - Set every member to "allowed" by default.
    - For every member of the system, create a 1:n relationship member:timeslots which defines constraints; e.g. member A may have A:monday-forbidden and A:tuesday-forbidden.
    - Do a depth-first search at every login and check whether the member has a constraint.

    Why a depth-first search? Well, a member may have the rules A:monday->forbidden, A:monday-10->allowed, A:monday-11->allowed, so a login on Monday at 12:30 would fail, but one at 10:30 would succeed (see the lookup sketch below). For performance reasons I could break the relational database paradigm and set a flag on every entry in the member-to-timeslots table which is true if the member has information set for "finer" timeslots, but that's a second step. Is this model in principle a good idea? Are there existing models? Thanks.
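
    A minimal sketch of the "most specific slot wins" lookup described above, with the tree flattened to a dict for brevity (in the application the rules would come from the member-to-timeslots table, and the category level is omitted here):

        # Most specific rule wins; anything unmentioned falls back to "allowed".
        def login_allowed(rules, day, hour, default=True):
            """rules maps slot keys to True/False, e.g. {'monday': False}."""
            for key in (f"{day}-{hour}", day):  # check the finer slot first
                if key in rules:
                    return rules[key]
            return default

        rules = {"monday": False, "monday-10": True, "monday-11": True}
        assert login_allowed(rules, "monday", 10) is True    # 10:30 succeeds
        assert login_allowed(rules, "monday", 12) is False   # 12:30 fails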

  • [Java] Nested methods vs "piped" methods, which is better?

    - by Michael Mao
    Hi: I've been programming in Java for 3 years since uni; although I am not fully dedicated to this language, I have spent quite some time in it nevertheless. I understand both ways; I'm just curious which style you prefer.

        public class Test {
            public static void main(String[] args) {
                System.out.println(getAgent().getAgentName());
            }

            private static Agent getAgent() {
                return new Agent();
            }
        }

        class Agent {
            public String getAgentName() {
                return "John Smith";
            }
        }

    I am pretty happy with nested method calls, such as the following:

        public class Test {
            public static void main(String[] args) {
                getAgentName(getAgent());
            }

            private static void getAgentName(Agent agent) {
                System.out.println(agent.getName());
            }

            private static Agent getAgent() {
                return new Agent();
            }
        }

        class Agent {
            public String getName() {
                return "John Smith";
            }
        }

    They have identical output: I saw "John Smith" twice. I wonder if one way of doing this has better performance or other advantages over the other. Personally I prefer the latter, since for nested methods I can tell with certainty which starts first and which comes after. The above code is but a sample; the code I am working with now is much more complicated, a bit like a maze... so switching between the two styles often blows my head in no time.

  • How do I work with bool?

    - by user350217
    Hi! I'm a beginner and I'm getting really frustrated with this whole concept. I don't understand how exactly to use a bool function. I know what it is, but I can't figure out how to make it work in my program. I'm trying to say that if my variable "selection" is any letter between 'A' and 'I', then it is valid and the program can continue on to the next function, which is called calcExchangeAmt(amtExchanged, selection). If it is false, I want it to ask the user if they want to repeat the program, and if they agree to repeat, I want it to clear the screen and restart from the main function. Please help edit my work! This is my bool function:

        bool isSelectionValid(char selection, char yesNo, double amtExchanged)
        {
            bool validData;
            validData = true;

            if ((selection >= 'a' && selection <= 'i') || (selection >= 'A' && selection <= 'I'))
            {
                validData = calcExchangeAmt(amtExchanged, selection);
            }
            else (validData == false);
            {
                cout << "Do you wish to continue? (Y for Yes / N for No)";
                cin >> yesNo;
            }

            do
            {
                main();
            }
            while ((yesNo == 'y') || (yesNo == 'Y'));
            {
                system("cls");
            }

            return 0;
        }

    This is the warning that I'm getting:

        warning C4800: 'double' : forcing value to bool 'true' or 'false' (performance warning)
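
    For contrast, a minimal sketch of the flow the post describes, under two assumptions: calcExchangeAmt returns the converted amount (a double, which is why stuffing it into a bool triggers the C4800 warning), and main() is never called recursively (that is illegal in C++; a loop does the job instead):

        #include <cstdlib>
        #include <iostream>
        using namespace std;

        // Hypothetical stub so the sketch compiles on its own.
        double calcExchangeAmt(double amt, char sel) { return amt; }

        // The bool function does one job: say whether the selection is valid.
        bool isSelectionValid(char selection)
        {
            return (selection >= 'a' && selection <= 'i') ||
                   (selection >= 'A' && selection <= 'I');
        }

        int main()
        {
            for (;;) {
                char selection;
                cout << "Selection (A-I): ";
                cin >> selection;

                if (isSelectionValid(selection)) {
                    double amtExchanged = 100.0;   // placeholder: read real input here
                    cout << calcExchangeAmt(amtExchanged, selection) << endl;
                    break;                         // valid: carry on with the program
                }

                char yesNo;
                cout << "Do you wish to continue? (Y for Yes / N for No) ";
                cin >> yesNo;
                if (yesNo != 'y' && yesNo != 'Y')
                    break;                         // user chose to stop
                system("cls");                     // agreed to repeat: clear and loop
            }
            return 0;
        }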

  • Stack usage with MMX intrinsics and Microsoft C++

    - by arik-funke
    I have an inline assembler loop that cumulatively adds elements from an int32 data array with MMX instructions. In particular, it uses the fact that the eight MMX registers can accommodate 16 int32s to calculate 16 different cumulative sums in parallel. I would now like to convert this piece of code to MMX intrinsics, but I am afraid that I will suffer a performance penalty, because one cannot explicitly instruct the compiler to use the 8 MMX registers to accumulate 16 independent sums. Can anybody comment on this, and maybe propose a solution for how to convert the piece of code below to intrinsics?

    The inline assembler (only the part within the loop):

        paddd mm0, [esi+edx+8*0]   ; add first & second pair of int32 elements
        paddd mm1, [esi+edx+8*1]   ; add third & fourth pair of int32 elements
        paddd mm2, [esi+edx+8*2]
        paddd mm3, [esi+edx+8*3]
        paddd mm4, [esi+edx+8*4]
        paddd mm5, [esi+edx+8*5]
        paddd mm6, [esi+edx+8*6]
        paddd mm7, [esi+edx+8*7]   ; add 15th & 16th pair of int32 elements

    - esi points to the beginning of the data array
    - edx provides the offset in the data array for the current loop iteration
    - the data array is arranged such that the elements for the 16 independent sums are interleaved
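
    A sketch of the same loop body with the paddd instructions replaced by _mm_add_pi32 (the surrounding loop structure is an assumption; whether the optimizer keeps all eight accumulators in mm0-mm7 is exactly the open question, so inspecting the generated code is the only way to be sure):

        #include <mmintrin.h>
        #include <stddef.h>

        /* acc[0..7] hold the 16 interleaved cumulative sums (2 int32 each). */
        void accumulate(const __m64 *data, size_t iterations, __m64 acc[8])
        {
            for (size_t i = 0; i < iterations; i++) {
                const __m64 *p = data + 8 * i;   /* plays the role of esi+edx */
                acc[0] = _mm_add_pi32(acc[0], p[0]);
                acc[1] = _mm_add_pi32(acc[1], p[1]);
                acc[2] = _mm_add_pi32(acc[2], p[2]);
                acc[3] = _mm_add_pi32(acc[3], p[3]);
                acc[4] = _mm_add_pi32(acc[4], p[4]);
                acc[5] = _mm_add_pi32(acc[5], p[5]);
                acc[6] = _mm_add_pi32(acc[6], p[6]);
                acc[7] = _mm_add_pi32(acc[7], p[7]);
            }
            _mm_empty();   /* leave MMX state so the FPU is usable again */
        }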

  • Count rows against to SQL server (2005) table?

    - by David.Chu.ca
    I have a simple question about two options for getting a row count from SQL Server 2005 (I am using VS 2005). The first option:

        SELECT id FROM Table1 WHERE dt >= startDt AND dt < endDt;

    I get the list of ids from the above call into a cache, and then get the count via List.Count. The other option:

        SELECT COUNT(*) FROM Table1 WHERE dt >= startDt AND dt < endDt;

    The above call gets the count directly. The issue is that I had several cases of timeout exceptions with the second method. What I found is that Table1 is very big, with millions of rows. When I used the first option, it seemed OK. I am confused by the fact that COUNT() takes more time than getting all the rows (is that true?). I'm not sure whether the aggregate COUNT() causes SQL Server to create a temporary table or cache on the server side, resulting in slow performance when the table is too big. What is the best way to get the count?
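
    COUNT(*) with a plain range predicate does not materialize a temporary table; when it times out, one common culprit is the lack of an index covering the date column, which forces a full scan. A sketch (index name invented; the date parameters are supplied by the application):

        -- With an index on dt, the COUNT can be answered by a narrow range
        -- seek, and no rows travel to the client at all.
        CREATE INDEX IX_Table1_dt ON Table1 (dt);

        SELECT COUNT(*)
        FROM Table1
        WHERE dt >= @startDt AND dt < @endDt;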

  • Maintaining a pool of DAO Class instances vs doing new operator

    - by Fazal
    We have been trying to benchmark our application's performance in multiple ways for some time now. I always believed that object creation in Java using Class.newInstance() was not slow (at least after version 1.4 of Java). But we did a test anyway, comparing the newInstance method against maintaining an object pool of 1000 objects. We did about 200K iterations of loading data from the DB using JDBC and populating these objects. I was amazed (even shocked) to see that the newInstance code was almost 10 times slower than the object pool code. These objects represent tables with about 50 fields, all of type String. Can someone share their thoughts on this issue? Now I am more confused about whether object pooling of at least some DAO instances is a better option. The pool size, as I see it right now, should be large enough to meet the size of average requests. The flip side is that my memory footprint will go up, but I am beginning to wonder if this kind of idea makes sense, at least for some of the DAO entities representing tables of about 50 or more columns. Please share your ideas, and let me know if this has been tried by someone, or whether I am missing some point here.
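
    For reference, a minimal sketch of the kind of pool being compared against Class.newInstance() here: a fixed-size blocking pool, where callers must reset and return every borrowed instance or the pool drains. All names are illustrative:

        import java.util.concurrent.ArrayBlockingQueue;

        final class DaoPool<T> {

            interface Factory<T> { T create(); }

            private final ArrayBlockingQueue<T> idle;

            DaoPool(int size, Factory<T> factory) {
                idle = new ArrayBlockingQueue<T>(size);
                for (int i = 0; i < size; i++) {
                    idle.add(factory.create());
                }
            }

            /** Blocks until an instance is free. */
            T borrow() throws InterruptedException {
                return idle.take();
            }

            /** Caller must clear the DAO's fields before returning it. */
            void giveBack(T dao) {
                idle.offer(dao);
            }
        }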

  • is mysql index useful on column 'state' when only doing bit-operations on the column?

    - by Geert-Jan
    I have a lot of domain entities (stored in MySQL) which undergo lots of different operations. Each operation is executed from a different program. I need to keep (flow) state for these entities, which I implemented as a long 'flowstate' field used as a bitset. To query MySQL for entities which have undergone a certain operation, I do something like:

        SELECT * FROM entities WHERE state >> 7 & 1 = 1

    indicating that bit 7 (corresponding to operation 7) has run. (Simplified.) Anyway, I really didn't pay attention to the performance implications of this setup in the beginning, and I think I'm in a bit of trouble, since queries like the above run pretty slowly. What I'd like to know:

    - Does a MySQL index on 'flowstate' help at all? After all, it's not a single value MySQL can quickly find using a binary sort or whatever.
    - If it doesn't, are there any other things I could do to speed things up?
    - Are there special 'mask indices' for fields with use cases like the above?

    TIA, Geert-Jan
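
    A B-tree index on flowstate can't help with `state >> 7 & 1 = 1`: the predicate is an expression over the column, so MySQL still has to evaluate it for every row. One workaround, sketched here with invented column and index names, is to promote frequently queried bits to their own indexed flag columns, maintained alongside the bitset:

        -- One flag column per hot operation, kept in sync by the writers
        ALTER TABLE entities ADD COLUMN op7_done TINYINT(1) NOT NULL DEFAULT 0;
        CREATE INDEX idx_entities_op7 ON entities (op7_done);

        -- Backfill from the existing bitset
        UPDATE entities SET op7_done = (state >> 7) & 1;

        -- The query becomes a plain indexed lookup
        SELECT * FROM entities WHERE op7_done = 1;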

  • Rails: fighting long HTTP response times with ajax. Is it a good idea? Please help with implementation

    - by baranov
    Hi, everybody! I've googled some tutorials and browsed some SO answers, but was unable to find a recipe for my problem. I'm writing a web site which is supposed to display an almost-realtime stock chart. Data is stored in a constantly updating MySQL database, and I wrote a find_by_sql query that fetches all the data I need to get the chart drawn. Everything is OK, except performance: it takes from one second to one minute for different queries to fetch all the data from the database, and this time includes the necessary (My)SQL-server-side calculations. That is simply unacceptable. I noticed the following: if the data is queried from the MySQL server one point at a time instead of as an entire dataset, it takes only about 1-100 ms to get an individual point. So I imagine the data-fetch process might be browser-driven. After the user presses the button to get a chart drawn, the controller makes one request to the database and renders, say, a progress bar at "1% ready". When the browser gets the response, it immediately makes an (ajax) request, the server fetches the next piece of data, and renders "2%". And so on, until all the data is ready and the server displays the requested chart. Could this be implemented in Rails + JS? Is there a tutorial for solving a similar problem on the web? I suppose if the thing is feasible at all, somebody should have done this before. I have read several articles about ajax, and I believe I understand the general principles, but I've never done nontrivial ajax programming myself. Thanks for your time!
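
    The browser-driven loop itself is small. A sketch of the client half, with a jQuery-style helper and a JSON endpoint assumed (the Rails action would pull the next slice, stash its cursor in the session, and reply with points, a percentage, and a done flag):

        // Ask for one chunk, show progress, and immediately ask for the next.
        function fetchChart(points) {
          $.getJSON('/charts/next_chunk', function (data) {
            points.push.apply(points, data.points);
            $('#progress').text(data.percent + '% ready');
            if (data.done) {
              drawChart(points);        // assumed charting helper
            } else {
              fetchChart(points);       // next round trip
            }
          });
        }
        fetchChart([]);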

  • How to react when asked a question you already know during an interview

    - by DevNull
    The short story: if you are asked a tough algorithmic/puzzle question during an interview whose solution is already known to you, do you:

    1. Honestly tell the interviewer that you know this question already? This could result in bruising the interviewer's ego and having them increase the complexity level of the subsequent questions.
    2. Give an Oscar-deserving performance and act as if you are thinking hard and slowly getting to the solution? Depending on your acting skills, this could majorly impress the interviewer, making the rest of the interview easier.

    The long story: this question comes as a result of what happened to me in a recent telephone interview, which was supposed to be all algorithmic. The interviewer started with an algorithmic question which I had luckily already seen here on Stack Overflow. The best solution to that problem is not very intuitive and is more of a you-get-it-if-you-know-it kind. Now, just so as not to disappoint the interviewer too much, I took a few seconds as if I were pondering the problem and then blurted out the answer, which I knew too well, having read and admired it on SO already. But I guess that gave away to the interviewer that I already knew this question, and from then on he started asking me for more efficient solutions. I kept coming up with approaches (even if not correct or more efficient, I did touch a lot of different data structures and algorithms), and he kept asking for more efficient solutions and generally seemed put off by my initial salvo, which was unexpected. What should I have done? Cheers!

  • How to lazy load scripts in YUI that accompany ajax html fragments

    - by Chris Beck
    I have a web app with tabs for Messages and Contacts (think Gmail). Each tab has a Y.on('click') event listener that retrieves an HTML fragment via Y.io() and renders the fragment via node.setContent(). However, the Contacts tab also requires contact.js to enable an Edit button in the fragment. How do I defer the cost of loading contact.js until the user clicks on the Contacts tab? And how should contact.js add its listener to the Edit button? The Complete function of my tab's on('click') event could serialize Get.script('contact.js') after Y.io('fragment'). However, for better performance, I would prefer to download the script in parallel with the HTML fragment. In that case, I would need to defer adding an event listener to the Edit button until the Edit button is available. This seems like a common RIA design pattern. What is the best way to implement this with YUI? How should I get the script? How should I defer sections of the script until elements in the fragment are available in the DOM?
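
    One way to sketch the parallel variant: a two-count latch, so whichever finishes last (script or fragment) wires up the button. The fragment URL, container id, and the init function exported by contact.js are all assumptions:

        var pending = 2;
        function ready() {
            if (--pending === 0) {
                // contact.js is loaded AND the fragment is in the DOM
                initContactEditor(Y.one('#contacts'));   // assumed to be defined by contact.js
            }
        }

        Y.Get.script('/js/contact.js', { onSuccess: ready });

        Y.io('/fragments/contacts', {
            on: {
                success: function (id, res) {
                    Y.one('#contacts').setContent(res.responseText);
                    ready();
                }
            }
        });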

  • Custom UIView using CALayers disappears after 180° rotation or navigation controller pop

    - by Steve Madsen
    I have created a custom UIView subclass that is exhibiting some strange behavior. It is a spinning-wheel selector, and for performance reasons it is drawn entirely into two CALayer instances. The bottom layer is the wheel itself, which is rotated using setAffineTransform: according to touches. The top layer is eye candy. drawRect: is fairly simple: if the control hasn't been drawn yet (or has been invalidated), it calls a method that creates the images and assigns them to the layers' contents property.

        - (void) drawRect:(CGRect)rect {
            if (imageLayer == nil) {
                [self drawIntoImageLayer];
            }
            [self updateWheelRotation];
        }

    When the view controller using this view first appears, everything is fine. There are two situations where the view completely disappears, however:

    1. if the device is rotated a full 180°, or
    2. after a view controller is popped off the navigation stack and the view becomes visible again.

    drawRect: is not called either time. Interestingly enough, it IS called after a 90° rotation, and that causes the view to re-appear. How can I ensure that a custom view using CALayers is redrawn properly in these situations?

  • Python socket error on UDP data receive. (10054)

    - by Charles
    I currently have a problem using UDP and Python's socket module. We have a server and clients. The problem occurs when we send data to a user. It's possible that the user has closed their connection to the server through a client crash, a disconnect by their ISP, or some other improper method, so it is possible to send data to a closed socket. Of course with UDP you can't tell whether the data really arrived or whether the far end is closed, as the protocol doesn't care (at least, it doesn't raise an exception). However, if you send data to a closed endpoint, you somehow get data back (???), which ends up giving you a socket error on sock.recvfrom: [Errno 10054] An existing connection was forcibly closed by the remote host. It almost seems like an automatic response from the connection. Although this is fine and can be handled by a try/except block (even if it lowers the performance of the server a little bit), the problem is that I can't tell who it came from or which socket is closed. Is there any way to find out 'who' (ip, socket #) sent this? It would be great, as I could instantly disconnect them and remove them from the data. Any suggestions? Thanks.
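
    (What comes back is an ICMP "port unreachable" message, which Windows surfaces as error 10054 on a later recvfrom, and it carries no peer address up to Python.) A minimal sketch of the try/except variant that keeps the server loop alive (errno.WSAECONNRESET only exists on Windows; port number is a placeholder):

        import errno
        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 9999))

        while True:
            try:
                data, addr = sock.recvfrom(4096)
            except socket.error as e:
                # The reset belongs to some earlier send, and the error
                # carries no address, so all we can do is keep serving.
                if e.errno == errno.WSAECONNRESET:   # 10054
                    continue
                raise
            # ... handle (data, addr) ...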
