Search Results

Search found 20904 results on 837 pages for 'disk performance'.

Page 757 of 837

  • How to insert zeros between bits in a bitmap?

    - by anatolyg
    I have some performance-heavy code that performs bit manipulations. It can be reduced to the following well-defined problem: Given a 13-bit bitmap, construct a 26-bit bitmap that contains the original bits spaced at even positions. To illustrate: 0000000000000000000abcdefghijklm (input, 32 bits) 0000000a0b0c0d0e0f0g0h0i0j0k0l0m (output, 32 bits) I currently have it implemented in the following way in C: if (input & (1 << 12)) output |= 1 << 24; if (input & (1 << 11)) output |= 1 << 22; if (input & (1 << 10)) output |= 1 << 20; ... My compiler (MS Visual Studio) turned this into the following: test eax,1000h jne 0064F5EC or edx,1000000h ... (repeated 13 times with minor differences in constants) I wonder whether I can make it any faster. I would like to have my code written in C, but switching to assembly language is possible. Can I use some MMX/SSE instructions to process all bits at once? Maybe I can use multiplication? (Multiply by 0x11111111 or some other magical constant.) Would it be better to use a condition-set instruction (SETcc) instead of a conditional-jump instruction? If yes, how can I make the compiler produce such code for me? Any other ideas on how to make it faster? Any idea how to do the inverse bitmap transformation (I have to implement it too, but it's less critical)?
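
    A branch-free alternative to the per-bit tests is the classic shift-and-mask bit-spreading technique ("binary magic numbers"). Below is a minimal sketch in C# (the same masks work verbatim in C); it illustrates the general approach rather than code from the question:

        // Spread the 13 low bits of x so that input bit k lands at output bit 2*k.
        // No branches, no lookup table.
        static uint SpreadToEvenBits(uint x)
        {
            x &= 0x1FFF;                        // keep only the 13 input bits
            x = (x | (x << 8)) & 0x00FF00FF;    // move bits 8..12 up past the gap
            x = (x | (x << 4)) & 0x0F0F0F0F;    // spread bytes into nibbles
            x = (x | (x << 2)) & 0x33333333;    // spread nibbles into bit pairs
            x = (x | (x << 1)) & 0x55555555;    // one zero between every pair of bits
            return x;
        }

    The inverse transformation is the same idea run backwards: mask with 0x55555555, then OR the value with itself shifted right by 1, 2, 4 and 8, masking with 0x33333333, 0x0F0F0F0F, 0x00FF00FF and 0x0000FFFF after each step.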

    Read the article

  • Problem working with a thread

    - by Xaver
    I have a tree view in which I show the file system of a logical disk. When the user selects some files and folders and presses a button, the program evaluates the size of the selected files and folders. This function may take a long time, so I decided to create a thread that runs it. The function works with an array of TreeNode, but when I then want to know whether a node was expanded or not, I get: "attempt to access control "treeview1" not from the thread, in which it was created." Why does this appear? The following code shows how I create the array of nodes which I send to the new thread: void frmMain::FillSelected(TreeNode^ a, array<TreeNode^>^ *Paths) { if (a->Parent == nullptr) { for(int j = 0;j < a->Nodes->Count;j++) { if ((a->Nodes[j]->ImageIndex == 1)&&(a->Nodes[j]->Checked==true)) { (*Paths)->Resize((*Paths), (*Paths)->Length + 1); (*Paths)[(*Paths)->Length-1] = a->Nodes[j]; } } } for(int i = 0;i < a->Nodes->Count;i++) { if (a->Parent == nullptr) { FillSelected(a->Nodes[i], Paths); } else { if(a->Nodes[i]->Checked == true) { (*Paths)->Resize((*Paths), (*Paths)->Length + 1); (*Paths)[(*Paths)->Length-1] = a->Nodes[i]; } if ((a->Nodes[i]->Nodes->Count > 0)&&(a->Nodes[i]->Nodes[0]->FullPath != (a->Nodes[i]->FullPath + "\\"))) { FillSelected(a->Nodes[i], Paths); } } } return; }
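
    Windows Forms controls may only be accessed from the thread that created them, so the worker thread has to marshal any TreeView access back to the UI thread. A rough sketch of the usual pattern, shown in C# (the original code is C++/CLI, but Control.Invoke works the same way; the helper below is made up for illustration):

        // Hypothetical helper: read TreeNode.IsExpanded from a worker thread
        // by marshalling the call onto the UI thread that owns the TreeView.
        private bool IsNodeExpanded(TreeNode node)
        {
            if (treeView1.InvokeRequired)
            {
                // Invoke blocks until the UI thread has executed the delegate and returns its result.
                return (bool)treeView1.Invoke(new Func<TreeNode, bool>(IsNodeExpanded), node);
            }
            return node.IsExpanded;
        }

    A cheaper alternative is to collect everything the worker needs (including the expanded state) on the UI thread before starting the thread, so the worker never touches the controls at all.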

    Read the article

  • Converting ntext to nvarchar(max) - Getting around the size limitation

    - by Overflew
    Hi all, I'm trying to change an existing SQL ntext column to nvarchar(max), and encountering an error on the size limit. There's a large amount of existing data, some of which is more than the 8k limit, I believe. We're looking to convert this so that the field is searchable in LINQ. The two SQL statements I've tried are: update Table set dataNVarChar = convert(nvarchar(max), dataNtext) where dataNtext is not null update Table set dataNVarChar = cast(dataNtext as nvarchar(max)) where dataNtext is not null And the error I get is: Cannot create a row of size 8086 which is greater than the allowable maximum row size of 8060. This is using SQL Server 2008. Any help appreciated, thanks. Update / Solution: The marked answer below is correct, and SQL 2008 can change the column to the correct data type in my situation, and there are no dramas with the LINQ-utilising application we use on top of it: alter table [TBL] alter column [COL] nvarchar(max) I've also been advised to follow it up with: update [TBL] set [COL] = [COL] Which completes the conversion by moving the data from the LOB structure to the table (if the length is less than 8k), which improves performance / keeps things proper.

    Read the article

  • Unable to get data from a WCF client

    - by Scott
    I am developing a DLL that will provide synchronized time stamps to multiple applications running on the same machine. The timestamps are altered in a thread that uses a high performance timer and a scalar to provide the appearance of moving faster than real-time. For obvious reasons I want only 1 instance of this time library, and I thought I could use WCF for the other processes to connect to this and poll for timestamps whenever they want. When I connect, however, I never get a valid time stamp, just an empty DateTime. I should point out that the library does work. The original implementation was a single DLL that each application incorporated, and each one was synced using Windows messages. I'm fairly sure it has something to do with how I'm setting up the WCF stuff, to which I am still pretty new. Here are the contract definitions: public interface ITimerCallbacks { [OperationContract(IsOneWay = true)] void TimerElapsed(String id); } [ServiceContract(SessionMode = SessionMode.Required, CallbackContract = typeof(ITimerCallbacks))] public interface ISimTime { [OperationContract] DateTime GetTime(); } Here is my class definition: [ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)] public class SimTimeServer: ISimTime The host setup: // set up WCF interprocess comms host = new ServiceHost(typeof(SimTimeServer), new Uri[] { new Uri("net.pipe://localhost") }); host.AddServiceEndpoint(typeof(ISimTime), new NetNamedPipeBinding(), "SimTime"); host.Open(); and the implementation of the interface function server-side: public DateTime GetTime() { if (ThreadMutex.WaitOne(20)) { RetTime = CurrentTime; ThreadMutex.ReleaseMutex(); } return RetTime; } Lastly, the client-side implementation: Callbacks myCallbacks = new Callbacks(); DuplexChannelFactory<ISimTime> pipeFactory = new DuplexChannelFactory<ISimTime>(myCallbacks, new NetNamedPipeBinding(), new EndpointAddress("net.pipe://localhost/SimTime")); ISimTime pipeProxy = pipeFactory.CreateChannel(); while (true) { string str = Console.ReadLine(); if (str.ToLower().Contains("get")) Console.WriteLine(pipeProxy.GetTime().ToString()); else if (str.ToLower().Contains("exit")) break; }
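
    One thing worth checking with InstanceContextMode.Single: constructing the ServiceHost from a Type lets WCF create its own singleton via the default constructor, so if the timer thread updates a different SimTimeServer instance (or a separate object), the instance that answers GetTime never sees the new time. A hedged sketch of hosting the exact instance the timer thread writes to, assuming SimTimeServer is that object:

        // Host the specific instance the timer thread updates,
        // instead of letting WCF construct its own singleton from the type.
        SimTimeServer timeServer = new SimTimeServer();   // assumed: the object the timer thread writes to

        ServiceHost host = new ServiceHost(timeServer, new Uri[] { new Uri("net.pipe://localhost") });
        host.AddServiceEndpoint(typeof(ISimTime), new NetNamedPipeBinding(), "SimTime");
        host.Open();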

    Read the article

  • how to clear XFixes regions

    - by ~buratinas
    Hi, I'm writing some low level code for the X11 platform. To achieve the best data copying performance I use the XFixes/XDamage extensions. How can I clear the contents of an XFixes region after one refresh cycle? Or do they clean themselves after I use XFixesSetPictureClipRegion? My code is something like this: Display xdpy; XShamPixmap pixmap_; XFixesRegion region_; damage_event_callback(damage_geometry_t geometry, XDamage damage,...) { unsigned curr_region = XFixesCreateRegion(xdpy, 0, 0); XDamageSubtract(xdpy, damage, None, curr_region); XFixesTranslateRegion( xdpy, curr_region, geometry.left(), geometry.top() ); XFixesUnionRegion (xdpy, region_, region_, curr_region); } process_damage_events(...) { XFixesSetPictureClipRegion( xdpy, pixmap_, 0, 0, region_); XCopyArea (xdpy, window_->id(), pixmap_, XDefaultGC(xdpy, XDefaultScreen(xdpy)), 0,0,width(),height(),0,0); /*Should clear region_ here */ ... } Currently I clear the region by deleting and recreating it, but I guess that's not the best way to do it.

    Read the article

  • Python optimization problem?

    - by user342079
    Alright, I had this homework recently (don't worry, I've already done it, but in C++), but I got curious how I could do it in Python. The problem is about two light sources that emit light. I won't get into details though. Here's the code (that I've managed to optimize a bit in the latter part): import math, array import numpy as np from PIL import Image size = (800,800) width, height = size s1x = width * 1./8 s1y = height * 1./8 s2x = width * 7./8 s2y = height * 7./8 r,g,b = (255,255,255) arr = np.zeros((width,height,3)) hy = math.hypot print 'computing distances (%s by %s)'%size, for i in xrange(width): if i%(width/10)==0: print i, if i%20==0: print '.', for j in xrange(height): d1 = hy(i-s1x,j-s1y) d2 = hy(i-s2x,j-s2y) arr[i][j] = abs(d1-d2) print '' arr2 = np.zeros((width,height,3),dtype="uint8") for ld in [200,116,100,84,68,52,36,20,8,4,2]: print 'now computing image for ld = '+str(ld) arr2 *= 0 arr2 += abs(arr%ld-ld/2)*(r,g,b)/(ld/2) print 'saving image...' ar2img = Image.fromarray(arr2) ar2img.save('ld'+str(ld).rjust(4,'0')+'.png') print 'saved as ld'+str(ld).rjust(4,'0')+'.png' I have managed to optimize most of it, but there's still a huge performance gap in the part with the two for loops, and I can't seem to think of a way to bypass that using common array operations... I'm open to suggestions :D

    Read the article

  • I have a problem sharing a folder programmatically using C#

    - by moon
    Here is my code. It shares the folder, but it does not work correctly: when I want to access the share, it shows access denied. Help required. private static void ShareFolder(string FolderPath, string ShareName, string Description) { try { // Create a ManagementClass object ManagementClass managementClass = new ManagementClass("Win32_Share"); // Create ManagementBaseObjects for in and out parameters ManagementBaseObject inParams = managementClass.GetMethodParameters("Create"); ManagementBaseObject outParams; // Set the input parameters inParams["Description"] = Description; inParams["Name"] = ShareName; inParams["Path"] = FolderPath; inParams["Type"] = 0x0; // Disk Drive //Another Type: //DISK_DRIVE = 0x0; //PRINT_QUEUE = 0x1; //DEVICE = 0x2; //IPC = 0x3; //DISK_DRIVE_ADMIN = 0x80000000; //PRINT_QUEUE_ADMIN = 0x80000001; //DEVICE_ADMIN = 0x80000002; //IPC_ADMIN = 0x8000003; //inParams["MaximumAllowed"] = int maxConnectionsNum; // Invoke the method on the ManagementClass object outParams = managementClass.InvokeMethod("Create", inParams, null); // Check to see if the method invocation was successful if ((uint)(outParams.Properties["ReturnValue"].Value) != 0) { throw new Exception("Unable to share directory. Because Directory is already shared or directory not exist"); }//end if }//end try catch (Exception ex) { MessageBox.Show(ex.Message, "error!"); }//end catch }//End Method

    Read the article

  • Given a typical Rails 3 environment, why am I unable to execute any tests?

    - by Tom
    I'm working on writing simple unit tests for a Rails 3 project, but I'm unable to actually execute any tests. Case in point, attempting to run the test auto-generated by Rails fails: require 'test_helper' class UserTest < ActiveSupport::TestCase # Replace this with your real tests. test "the truth" do assert true end end Results in the following error: <internal:lib/rubygems/custom_require>:29:in `require': no such file to load -- test_helper (LoadError) from <internal:lib/rubygems/custom_require>:29:in `require' from user_test.rb:1:in `<main>' Commenting out the require 'test_helper' line and attempting to run the test results in this error: user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError) The action pack gems appear to be properly installed and up to date: actionmailer (3.0.3, 2.3.5) actionpack (3.0.3, 2.3.5) activemodel (3.0.3) activerecord (3.0.3, 2.3.5) activeresource (3.0.3, 2.3.5) activesupport (3.0.3, 2.3.5) Ruby is at 1.9.2p0 and Rails is at 3.0.3. A dump of my test directory is as follows: /fixtures /functional /integration /performance /unit -- /helpers -- user_helper_test.rb -- user_test.rb test_helper.rb I've never seen this problem before - I've run the typical rake tasks for preparing the test environment. I have nothing out of the ordinary in my application or environment configuration files, nor have I installed any unusual gems that would interfere with the test environment. Edit: Xavier Holt's suggestion of explicitly specifying the path to the test_helper worked; however, this revealed an issue with ActiveSupport. Now when I attempt to run the test, I receive the following error message (as also listed above): user_test.rb:3:in `<main>': uninitialized constant Object::ActiveSupport (NameError) But as you can see above, Action Pack is all installed and up to date.

    Read the article

  • How would I go about sharing variables in a C++ class with Lua?

    - by Nicholas Flynt
    I'm fairly new to Lua; I've been working on trying to implement Lua scripting for logic in a Game Engine I'm putting together. I've had no trouble so far getting Lua up and running through the engine, and I'm able to call Lua functions from C and C functions from Lua. The way the engine works now, each Object class contains a set of variables that the engine can quickly iterate over to draw or process for physics. While game objects all need to access and manipulate these variables in order for the Game Engine itself to see any changes, they are free to create their own variables, and Lua is exceedingly flexible about this, so I don't foresee any issues. Anyway, currently the Game Engine side of things is sitting in C land, and I really want it to stay there for performance reasons. So in an ideal world, when spawning a new game object, I'd need to be able to give Lua read/write access to this standard set of variables as part of the Lua object's base class, which its game logic could then proceed to run wild with. So far, I'm keeping two separate tables of objects in place-- Lua spawns a new game object which adds itself to a numerically indexed global table of objects, and then proceeds to call a C++ function, which creates a new GameObject class and registers the Lua index (an int) with the class. So far so good, C++ functions can now see the Lua object and easily perform operations or call functions in Lua land using dostring. What I need to do now is take the C++ variables, part of the GameObject class, and expose them to Lua, and this is where Google is failing me. I've encountered a very nice method here which details the process using tags, but I've read that this method is deprecated in favor of metatables. What is the ideal way to accomplish this? Is it worth the hassle of learning how to pass class definitions around using libBind or some equivalent method, or is there a simple way I can just register each variable (once, at spawn time) with the global lua object? What's the "current" best way to do this, as of Lua 5.1.4?

    Read the article

  • HTML5 audio object doesn't play on iPad (when called from a setTimeout)

    - by Dan Halliday
    I have a page with a hidden <audio> object which is being started and stopped using a custom button via javascript. (The reason being I want to customise the button, and that drawing an audio player seems to destroy rendering performance on iPad anyway). A simplified example (in coffeescript): // Works fine on all browsers constructor: (@_button, @_audio) -> @_button.on 'click', @_play // Bind button's click event with jQuery _play: (e) => @_audio[0].play() // Call play() on audio element The audio plays fine when triggered from a function bound to a click event, but I actually want an animation to complete before the file plays so I put .play() inside a setTimeout. However I just can't get this to work: // Will not play on iPad constructor: (@_button, @_audio) -> @_button.on 'click', @_play // Bind button's click event with jQuery _play: (e) => setTimeout (=> // Declare a 300ms timeout @_audio[0].play() // Call play() on audio element ), 300 I've checked that @_audio (this._audio) is in scope and that its play() method exists. Why doesn't this work on iPad?

    Read the article

  • Dangers when deploying Flash/Flex UI test automation hooks to production?

    - by Merlyn Morgan-Graham
    I am interested in doing automated testing against a Flex based UI. I have found out that my best options for UI automation (due to being C# controllable, good licensing conditions, etc) all seem to require that I compile test hooks into my application. Because of this, I am thinking of recommending that these hooks be compiled into our build. I have found a few places on the net that recommend not deploying bits with this instrumentation enabled, and I'd like to know why. Is it a performance drain, or a security risk? If it is a security risk, can you explain how the attack surface is increased? I am not a Flash or Flex developer, though I have some experience with threat modeling. For reference, here's the tools I'm specifically considering: QTP Selenium-Flex API I am having problems finding all the warnings/suggestions I found last night, but here's an example that I can find: http://www.riatest.com/products/getting-started.html Warning! Automation enabled applications expose all properties of all GUI components. This makes them vulnerable to malicious use. Never make automation enabled application publicly available. Always restrict access to such applications and to RIATest Loader to trusted users only. Related question (how to do conditional compilation to insert/remove those hooks): Conditionally including Flex libraries (SWCs) in mxmlc/compc ant tasks

    Read the article

  • How to justify using a scripting language as part of a project

    - by sylvanaar
    I have a specific project in which I want to use either a scripting language + C, or as an alternative a 100% Java solution. The program adapts a legacy system for use with other modern systems. Basically, I have few choices as to what language I can use. I have C/C++, Java 1.4, and I have also compiled Lua for this environment. The program does 'screen scraping' and has to deal with a lot of strings. That part of the code is highly variable. Most of the developers at my company use C, so my original design was to write some portions in C and use Lua for the part that dealt with strings and changed frequently. I was told 'You have to justify your use of the scripting language.' So I reworked my design using 100% Java, and was told Java won't have enough performance, and I should do the whole thing in C. I'm not controlling lasers or doing image processing - just some screen scraping. I still have to provide justification for using anything but C - so what justification can I provide?

    Read the article

  • Optimizing an embedded SELECT query in mySQL

    - by Crazy Serb
    Ok, here's a query that I am running right now on a table that has 45,000 records and is 65MB in size... and is just about to get bigger and bigger (so I gotta think of the future performance as well here): SELECT count(payment_id) as signup_count, sum(amount) as signup_amount FROM payments p WHERE tm_completed BETWEEN '2009-05-01' AND '2009-05-30' AND completed > 0 AND tm_completed IS NOT NULL AND member_id NOT IN (SELECT p2.member_id FROM payments p2 WHERE p2.completed=1 AND p2.tm_completed < '2009-05-01' AND p2.tm_completed IS NOT NULL GROUP BY p2.member_id) And as you might or might not imagine - it chokes the mysql server to a standstill... What it does is - it simply pulls the number of new users who signed up, have at least one "completed" payment, tm_completed is not empty (as it is only populated for completed payments), and (the embedded Select) that member has never had a "completed" payment before - meaning he's a new member (just because the system does rebills and whatnot, and this is the only way to sort of differentiate between an existing member who just got rebilled and a new member who got billed for the first time). Now, is there any possible way to optimize this query to use less resources or something, and to stop taking my mysql resources down on their knees...? Am I missing any info to clarify this any further? Let me know... EDIT: Here are the indexes already on that table (key name, type, cardinality, columns):
    PRIMARY - PRIMARY - 46757 - payment_id
    member_id - INDEX - 23378 - member_id
    payer_id - INDEX - 11689 - payer_id
    coupon_id - INDEX - 1 - coupon_id
    tm_added - INDEX - 46757 - tm_added, product_id
    tm_completed - INDEX - 46757 - tm_completed, product_id

    Read the article

  • Undefined Web.config error in VS 2008

    - by user1066050
    I'm working on a web app using VS 2008, .Net 3.5 and C#. Most of the projects in the solution are either classic asp.net pages with some MVC 1 in the mix, the rest is shared libraries. The solution is one that is some 5 years old and has gone through a variety of developers working on it and clearly has some performance and architectural issues. Previously, I've been working on the project using VS 2008 on a Win XP machine, but have just transitioned over to a new box using Win 7 Ultimate. To do so, I've installed VS 2008, asp.net 3.5. To support future work on the solution I've also installed VS 2010 and asp.net 4.0. Opening the solution on the new box with VS 2008 works fine, and it builds without error. However, when I attempt to run it with the debugger, I get the following message: "There is an error in web.config. Please correct before proceeding. (You might rename the current web.config and add a new one.)" I think it's clear that there is some sort of environmental issue regarding web.config on the new machine, but the error message is not "helpful". Adding a new web.config is not an option as the existing one is quite long and involved (too much to post here). I'm hoping someone has a suggestion or two about where I might look for missing elements or changed configurations that might produce such an error message. Lacking that, I'll revisit this post and provide the web.config in the hope that will elicit further help. Thanks to all in advance for taking a look at this. The StackOverflow community has helped me many times in the past with pertinent answers although this is my first posting. Jeff

    Read the article

  • AssemblyResolve event is not firing during compilation of a dynamic assembly for an aspx page.

    - by John
    This one is really pissing me off. Here goes: My goal is to load assemblies at run-time that contain embedded aspx,ascx etc. What I would also like is to not lock the assembly file on disk so I can update it at run-time without having to restart the application (I know this will leave the previous version(s) loaded). To that end I have written a virtual path provider that does the trick. I have subscribed to the CurrentDomain.AssemblyResolve event so as to redirect the framework to my assemblies. The problem is that the when the framework tries to compile the dynamic assembly for the aspx page I get the following: Compiler Error Message: CS0400: The type or namespace name 'Pages' could not be found in the global namespace (are you missing an assembly reference?) Source Error: public class app_resource_pages__version_1_0_0_0__culture_neutral__publickeytoken_null_default_aspx : global::Pages._Default, System.Web.SessionState.IRequiresSessionState, System.Web.IHttpHandle I noticed that if I load the assembly with Assembly.Load(AssemblyName) or Assembly.LoadFrom(filename) I dont get the above error. If I load it with Assembly.Load(byte[]) (so as to not lock it), the exception is thrown but my AssemblyResolve handler, when called is returning the assembly correctly (it is called once). So I am guessing that it is called once when the framework parses the asp markup but not when it tries to create the dynamic assembly for the aspx page.

    Read the article

  • How good is the memory mapped Circular Buffer on Wikipedia?

    - by abroun
    I'm trying to implement a circular buffer in C, and have come across this example on Wikipedia. It looks as if it would provide a really nice interface for anyone reading from the buffer, as reads which wrap around from the end to the beginning of the buffer are handled automatically. So all reads are contiguous. However, I'm a bit unsure about using it straight away as I don't really have much experience with memory mapping or virtual memory and I'm not sure that I fully understand what it's doing. What I think I understand is that it's mapping a shared memory file the size of the buffer into memory twice. Then, whenever data is written into the buffer it appears in memory in 2 places at once. This allows all reads to be contiguous. What would be really great is if someone with more experience of POSIX memory mapping could have a quick look at the code and tell me if the underlying mechanism used is really that efficient. Am I right in thinking for example that the file in /dev/shm used for the shared memory always stays in RAM or could it get written to the hard drive (performance hit) at some point? Are there any gotchas I should be aware of? As it stands, I'm probably going to use a simpler method for my current project, but it'd be good to understand this to have it in my toolbox for the future. Thanks in advance for your time.

    Read the article

  • ReaderWriterLockSlim and Pulse/Wait

    - by Jono
    Is there an equivalent of Monitor.Pulse and Monitor.Wait that I can use in conjunction with a ReaderWriterLockSlim? I have a class where I've encapsulated multi-threaded access to an underlying queue. To enqueue something, I acquire a lock that protects the underlying queue (and a couple of other objects) then add the item and Monitor.Pulse the locked object to signal that something was added to the queue. public void Enqueue(ITask task) { lock (mutex) { underlying.Enqueue(task); Monitor.Pulse(mutex); } } On the other end of the queue, I have a single background thread that continuously processes messages as they arrive on the queue. It uses Monitor.Wait when there are no items in the queue, to avoid unnecessary polling. (I consider this to be good design, but any flames (within reason) are welcome if they help me learn otherwise.) private void DequeueForProcessing(object state) { while (true) { ITask task; lock (mutex) { while (underlying.Count == 0) { Monitor.Wait(mutex); } task = underlying.Dequeue(); } Process(task); } } As more operations are added to this class (requiring read-only access to the lock protected underlying), someone suggested using ReaderWriterLockSlim. I've never used the class before, and assuming it can offer some performance benefit, I'm not against it, but only if I can keep the Pulse/Wait design.
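
    ReaderWriterLockSlim has no Pulse/Wait equivalent. If moving to .NET 4 is an option, the producer/consumer half of this class can be expressed with BlockingCollection<T> (System.Collections.Concurrent), which performs the blocking wait internally; the sketch below is an alternative shape for the queue, not a drop-in for the original class:

        // Sketch: producer/consumer without explicit Pulse/Wait (requires .NET 4+).
        private readonly BlockingCollection<ITask> queue = new BlockingCollection<ITask>();

        public void Enqueue(ITask task)
        {
            queue.Add(task);   // wakes any thread blocked in GetConsumingEnumerable
        }

        private void DequeueForProcessing(object state)
        {
            // Blocks while the collection is empty; resumes as items arrive.
            foreach (ITask task in queue.GetConsumingEnumerable())
            {
                Process(task);
            }
        }

    Read-only operations over the other protected state can then take a ReaderWriterLockSlim read lock without affecting the queue's signalling.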

    Read the article

  • Testing approach for multi-threaded software

    - by Shane MacLaughlin
    I have a piece of mature geospatial software that has recently had areas rewritten to take better advantage of the multiple processors available in modern PCs. Specifically, display, GUI, spatial searching, and main processing have all been hived off to separate threads. The software has a pretty sizeable GUI automation suite for functional regression, and another smaller one for performance regression. While all automated tests are passing, I'm not convinced that they provide nearly enough coverage in terms of finding bugs relating to race conditions, deadlocks, and other nasties associated with multi-threading. What techniques would you use to see if such bugs exist? What techniques would you advocate for rooting them out, assuming there are some in there to root out? What I'm doing so far is running the GUI functional automation on the app running under a debugger, such that I can break out of deadlocks and catch crashes, and I plan to make a bounds checker build and repeat the tests against that version. I've also carried out a static analysis of the source via PC-Lint with the hope of locating potential deadlocks, but haven't had any worthwhile results. The application is C++, MFC, multiple document/view, with a number of threads per doc. The locking mechanism I'm using is based on an object that includes a pointer to a CMutex, which is locked in the ctor and freed in the dtor. I use local variables of this object to lock various bits of code as required, and my mutex has a timeout that fires a warning if the timeout is reached. I avoid locking where possible, using resource copies where possible instead. What other tests would you carry out?

    Read the article

  • Heavy Mysql operation & Time Constraints [closed]

    - by Rahul Jha
    There is a performance issue that I am stuck on in my application, which is based on PHP & MySQL. The application is for data migration: data has to be uploaded and, after various processes (cleaning of foreign characters, duplicate checking, id generation), inserted into one central table and then into 5 different tables. There, an id is generated and that id has to be written back to the central table. There are different sets of records and validation rules. The problem I am facing is that when I insert, say, a 4K-row file (containing 20 columns), it works fine: within 15 minutes everything gets inserted. But when I insert the same records again, it takes one hour to insert (ideally they should be inserted with the earlier data marked as duplicate). After going through the log file, I noticed that there is a MySQL SELECT statement where I check for duplicates and get the IDs that are duplicates. Then I call a function inside a for loop which inserts the records into the 5 tables and updates the id in the central table. This function call takes the majority of the time of the whole process. P.S. The records have to be inserted record by record. Kindly suggest a solution. //This is the sample code $query=mysql_query("SELECT DISTINCT p1.ID FROM table1 p1, table2 p2, table3 a WHERE p2.datatype =0 AND (p1.datatype =1 || p1.datatype=2) AND p2.ID =0 AND p1.ID = a.ID AND p1.coulmn1 = p2.column1 AND p1.coulmn2 = p2.coulmn2 AND a.coulmn3 = p2.column3"); $num=mysql_num_rows($query); for($i=0;$i<$num;$i++) { $f=mysql_result($query,$i,"ID"); //calling function RecordInsert($f); }

    Read the article

  • MySQL Database Design with Internationalization

    - by Some name
    Hello, I'm going to start work on a medium sized application, and I'm planning its DB design. One thing that I'm not sure about is this. I will have many tables which will need internationalization, such as: "membership_options, gender_options, language_options etc." Each of these tables will share common i18n fields, like: "title, alternative_title, short_description, description". In your opinion, which is the best way to do it? Have an i18n table with the same fields for each of the tables that will need them? Or do something like:

    Membership table              Gender table
    ----------------              ------------
    id | created_at               id | created_at
    1  | 22.03.2001               1  | 14.08.2002
    2  | 22.03.2001               2  | 14.08.2002

    General translation table
    -------------------------
    record_id | table_name | string_name | alternative_title | .... | id_language
    1         | membership | regular     | null              |      | 1 (english)
    1         | membership | normale     | null              |      | 2 (italian)
    1         | gender     | man         | null              |      | 1 (english)
    1         | gender     | uomo        | null              |      | 2 (italian)

    This would avoid me repeating something like:

    membership_translation table
    -----------------------------
    membership_id | name    | alternative_title | id_lang
    1             | regular | null              | 1
    1             | normale | null              | 2

    gender_translation table
    -------------------------
    gender_id | name | alternative_title | id_lang
    1         | man  | null              | 1
    1         | uomo | null              | 2

    and so on, so I would probably reduce the number of DB tables, but I'm not sure about performance. I'm not much of a DB designer, so please let me know.

    Read the article

  • How do I keep my DataService up to date with ObservableCollection?

    - by joebeazelman
    I have a class called CustomerService which simply reads a collection of customers from a file or creates one and passes it back to the Main Model View, where it is turned into an ObservableCollection. What's the best practice for making sure the items in the CustomerService and the ObservableCollection are in sync? I'm guessing I could hook up the CustomerService object to respond to RaisePropertyChanged, but isn't this only for use with WPF controls? Is there a better way? using System; public class MainModelView { public MainModelView() { _customers = new ObservableCollection<CustomerViewModel>(new CustomerService().GetCustomers()); } public const string CustomersPropertyName = "Customers"; private ObservableCollection<CustomerViewModel> _customers; public ObservableCollection<CustomerViewModel> Customers { get { return _customers; } set { if (_customers == value) { return; } var oldValue = _customers; _customers = value; // Update bindings and broadcast change using GalaSoft.MvvmLight.Messenging RaisePropertyChanged(CustomersPropertyName, oldValue, value, true); } } } public class CustomerService { /// <summary> /// Load all persons from file on disk. /// </summary> private List<CustomerViewModel> _customers = new List<CustomerViewModel> { new CustomerViewModel(new Customer("Bob", "" )), new CustomerViewModel(new Customer("Bob 2", "" )), new CustomerViewModel(new Customer("Bob 3", "" )), }; public IEnumerable<CustomerViewModel> GetCustomers() { return _customers; } }
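
    One option is to let the service listen to the collection's CollectionChanged event, which fires for adds and removes (RaisePropertyChanged only fires when the Customers property itself is reassigned). A rough sketch, assuming hypothetical AddCustomer/RemoveCustomer methods on CustomerService and a _customerService field; neither is part of the original code:

        // Sketch: keep the CustomerService in sync with the ObservableCollection.
        // AddCustomer/RemoveCustomer and _customerService are illustrative assumptions.
        _customers.CollectionChanged += (sender, e) =>
        {
            if (e.NewItems != null)
                foreach (CustomerViewModel added in e.NewItems)
                    _customerService.AddCustomer(added);

            if (e.OldItems != null)
                foreach (CustomerViewModel removed in e.OldItems)
                    _customerService.RemoveCustomer(removed);

            // Note: a Reset (e.g. Clear()) reports neither NewItems nor OldItems
            // and would need separate handling.
        };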

    Read the article

  • Why are my connections not closed even if I explicitly dispose of the DataContext?

    - by Chris Simpson
    I encapsulate my linq to sql calls in a repository class which is instantiated in the constructor of my overloaded controller. The constructor of my repository class creates the data context so that for the life of the page load, only one data context is used. In my destructor of the repository class I explicitly call the dispose of the DataContext though I do not believe this is necessary. Using performance monitor, if I watch my User Connections count and repeatedly load a page, the number increases once per page load. Connections do not get closed or reused (for about 20 minutes). I tried putting Pooling=false in my config to see if this had any effect but it did not. In any case with pooling I wouldn't expect a new connection for every load, I would expect it to reuse connections. I've tried putting a break point in the destructor to make sure the dispose is being hit and sure enough it is. So what's happening? Some code to illustrate what I said above: The controller: public class MyController : Controller { protected MyRepository rep; public MyController () { rep = new MyRepository(); } } The repository: public class MyRepository { protected MyDataContext dc; public MyRepository() { dc = getDC(); } ~MyRepository() { if (dc != null) { //if (dc.Connection.State != System.Data.ConnectionState.Closed) //{ // dc.Connection.Close(); //} dc.Dispose(); } } // etc } Note: I add a number of hints and context information to the DC for auditing purposes. This is essentially why I want one connection per page load
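
    Finalizers (the ~MyRepository syntax) run non-deterministically on the garbage collector's schedule, so any cleanup tied to them happens at an unpredictable time. A sketch of making the cleanup deterministic via IDisposable instead, with the controller disposing the repository when the request ends:

        // Sketch: dispose the DataContext deterministically instead of relying on a finalizer.
        public class MyRepository : IDisposable
        {
            protected MyDataContext dc;

            public MyRepository()
            {
                dc = getDC();
            }

            public void Dispose()
            {
                if (dc != null)
                {
                    dc.Dispose();   // releases the underlying connection back to the pool
                    dc = null;
                }
            }
        }

    In the controller, overriding Dispose(bool disposing) and calling rep.Dispose() there ties the repository's lifetime to the request rather than to garbage collection.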

    Read the article

  • Detaching all entities of T to get fresh data

    - by Goran
    Let's take an example where there are two types of entities loaded: Product and Category, with Product.CategoryId - Category.Id. We have CRUD operations available on products (not Categories). If Categories are updated on another screen (or by another user on the network), we would like to be able to reload the Categories while preserving the context we currently use, since we could be in the middle of editing data, and we do not want changes to be lost (and we cannot depend on saving, since we have incomplete data). Since there is no easy way to tell EF to get fresh data (added, removed and modified), we thought of two possible ways: 1) Keeping products attached to the context, and categories detached from the context. This would mean that we lose the ability to access Product.Category.Name, which we do sometimes require, so we would need to resolve it manually (for example, when printing data). 2) Detaching/attaching all Categories from the current context: Context.ChangeTracker.Entries().Where(x => x.Entity.GetType() == typeof(T)).ForEach(x => x.State = EntityState.Detached); And then reloading the categories, which will get fresh data. Do you find any problem with this second approach? We understand that this will require all constraints to be put on foreign keys, and not on navigation properties, since when detaching all Categories, the Product.Category navigation properties would be reset to null as well. Also, there could be a potential performance problem, which we did not test, since there could be a couple of thousand products loaded, and all would need to resolve the navigation property when reloading. Which of the two do you prefer, and is there a better way (EF6 + .NET 4.0)?
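
    For refreshing the values of Category entities that are already tracked (though not for picking up newly added or deleted rows, which still need a re-query), EF6's DbContext can reload individual entries; a small sketch of that variant, assuming a DbContext-based Context:

        // Sketch: refresh tracked Category entities from the database.
        // Reload() overwrites current values; added/deleted rows are not detected this way.
        foreach (var entry in Context.ChangeTracker.Entries<Category>().ToList())
        {
            entry.Reload();
        }

    The detach-then-requery approach in option 2 remains the simpler way to pick up additions and deletions, at the cost of resetting the Product.Category navigation properties as described.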

    Read the article

  • Pump Messages During Long Operations + C#

    - by Newbie
    Hi, I have a web service that does a huge computation and takes more than a minute. I have generated the proxy file for the web service, and from my client end I am using the DLL (of course I generated the proxy DLL). My client-side code is: TimeSeries3D t = new TimeSeries3D(); int portfolioId = 4387919; string[] str = new string[2]; str[0] = "MKT_CAP"; DateRange dr = new DateRange(); dr.mStartDate = DateTime.Today; dr.mEndDate = DateTime.Today; Service1 sc = new Service1(); t = sc.GetAttributesForPortfolio(portfolioId, true, str, dr); But since it is taking too much time for the server to compute, after 1 minute I receive an error message: The CLR has been unable to transition from COM context 0x33caf30 to COM context 0x33cb0a0 for 60 seconds. The thread that owns the destination context/apartment is most likely either doing a non pumping wait or processing a very long running operation without pumping Windows messages. This situation generally has a negative performance impact and may even lead to the application becoming non responsive or memory usage accumulating continually over time. To avoid this problem, all single threaded apartment (STA) threads should use pumping wait primitives (such as CoWaitForMultipleHandles) and routinely pump messages during long running operations. Kindly guide me what to do? Thanks
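
    That message comes from the ContextSwitchDeadlock managed debugging assistant complaining that the UI (STA) thread has been blocked in the synchronous proxy call for 60 seconds; running the call on a worker thread keeps the UI thread pumping messages. A minimal sketch using BackgroundWorker, reusing the variables from the snippet above and assuming a WinForms client:

        // Sketch: run the long web service call off the UI thread.
        BackgroundWorker worker = new BackgroundWorker();

        worker.DoWork += (s, e) =>
        {
            Service1 sc = new Service1();
            e.Result = sc.GetAttributesForPortfolio(portfolioId, true, str, dr);
        };

        worker.RunWorkerCompleted += (s, e) =>
        {
            if (e.Error == null)
            {
                TimeSeries3D t = (TimeSeries3D)e.Result;   // back on the UI thread here
                // ...update the UI with t
            }
        };

        worker.RunWorkerAsync();

    The proxy's timeout may also need raising if the call itself runs past the default limit.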

    Read the article

  • AppFabric caching's local cache isn't working for us... What are we doing wrong?

    - by Olly
    We are using AppFabric as the second-level cache for an NHibernate ASP.NET application comprising a customer-facing website and an admin website. They are both connected to the same cache, so when admin updates something, the customer-facing site is updated. It seems to be working OK - we have a cache cluster on a separate server and all is well - but we want to enable the local cache to get better performance; however, it doesn't seem to be working. We have enabled it like this... bool UseLocalCache = true; int LocalCacheObjectCount = int.MaxValue; TimeSpan LocalCacheDefaultTimeout = TimeSpan.FromMinutes(3); DataCacheLocalCacheInvalidationPolicy LocalCacheInvalidationPolicy = DataCacheLocalCacheInvalidationPolicy.TimeoutBased; if (UseLocalCache) { configuration.LocalCacheProperties = new DataCacheLocalCacheProperties( LocalCacheObjectCount, LocalCacheDefaultTimeout, LocalCacheInvalidationPolicy ); // configuration.NotificationProperties = new DataCacheNotificationProperties(500, TimeSpan.FromSeconds(300)); } Initially we tried using a timeout invalidation policy (3 mins) and our app felt like it was running faster. HOWEVER, we noticed that if we changed something in the admin site, it was immediately updated in the live site. As we are using timeouts, not notifications, this demonstrates that the local cache isn't being queried (or is, but is always missing). cache.GetType().Name returns "LocalCache" - so the factory has made a local cache. Running "Get-Cache-Statistics MyCache" in PowerShell on my dev environment (ASP.NET app running locally from VS2008, cache cluster running on a separate W2K8 machine) shows a handful of Request Counts. However, on the production environment, the Request Count increases dramatically. We tried following the method here to see the cache client-server traffic... http://blogs.msdn.com/b/appfabriccat/archive/2010/09/20/appfabric-cache-peeking-into-client-amp-server-wcf-communication.aspx but the log file had nothing but the initial header in it - i.e. no logging either. I can't find anything on SO or Google. Have we done something wrong? Have we got a screwy install of AppFabric - we installed it via the Web Platform Installer, I think? (Note: the IIS box running ASP.NET isn't in the cluster - it is just the client.) Any insights gratefully received!
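
    Local cache only takes effect for DataCacheFactory instances that are actually built from a configuration with LocalCacheProperties set, so it is worth confirming that the factory the web application uses at runtime is the one created from this configuration object, and that it is a single long-lived instance. A minimal hedged sketch; the host name and cache name are placeholders:

        // Sketch: the factory must be constructed from the configuration that enables local cache.
        DataCacheFactoryConfiguration configuration = new DataCacheFactoryConfiguration();
        configuration.Servers = new List<DataCacheServerEndpoint>
        {
            new DataCacheServerEndpoint("cachehost", 22233)   // placeholder cache host
        };
        configuration.LocalCacheProperties = new DataCacheLocalCacheProperties(
            int.MaxValue,
            TimeSpan.FromMinutes(3),
            DataCacheLocalCacheInvalidationPolicy.TimeoutBased);

        DataCacheFactory factory = new DataCacheFactory(configuration);   // keep this instance around
        DataCache cache = factory.GetCache("MyCache");

    If the factory is instead created with the default DataCacheFactory() constructor, it reads the dataCacheClient section of web.config, and the local cache has to be enabled there rather than in code.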

    Read the article
