Search Results

Search found 5262 results on 211 pages for 'operation'.


  • QueryInterface fails at casting inside COM-interface implementation

    - by brecht
    I am creating a tool in C# to retrieve messages from a CAN network (a network in a car) using a DLL written in C/C++. This DLL is usable through a COM interface. My C# form class implements one of these COM interfaces, and other variables are instantiated using these COM interfaces (everything works perfectly).

    The problem: the interface my C# form implements has 3 abstract functions. One of these functions is called by the DLL, and I need to implement it myself. In this function I wish to retrieve a property of a form-wide variable that is of a COM type. The COM library is CANSUPPORTLib.

    The form-wide variable:

        private CANSUPPORTLib.ICanIOEx devices = new CANSUPPORTLib.CanIO();

    This variable is also form-wide and is retrieved via the devices variable:

        canreceiver = (CANSUPPORTLib.IDirectCAN2)devices.get_DirectDispatch(receiverLogicalChannel);

    The function that is called by the DLL and implemented in C#:

        public void Message(double dTimeStamp)
        {
            Console.WriteLine("!!! message ontvangen !!!" + Environment.NewLine);
            try
            {
                CANSUPPORTLib.can_msg_tag message = new CANSUPPORTLib.can_msg_tag();
                message = (CANSUPPORTLib.can_msg_tag)System.Runtime.InteropServices.Marshal.PtrToStructure(
                    canreceiver.RawMessage, message.GetType());
                for (int i = 0; i < message.data.Length; i++)
                {
                    Console.WriteLine("byte " + i + ": " + message.data[i]);
                }
            }
            catch (Exception e)
            {
                Console.WriteLine(e.Message);
            }
        }

    The error arises at this line:

        message = (CANSUPPORTLib.can_msg_tag)System.Runtime.InteropServices.Marshal.PtrToStructure(canreceiver.RawMessage, message.GetType());

    Error: Unable to cast COM object of type 'System.__ComObject' to interface type 'CANSUPPORTLib.IDirectCAN2'. This operation failed because the QueryInterface call on the COM component for the interface with IID '{33373EFC-DB42-48C4-A719-3730B7F228B5}' failed due to the following error: No such interface supported (Exception from HRESULT: 0x80004002 (E_NOINTERFACE)).

    Notes: It is possible to have a timer that checks every 100 ms for the message I need; the message is then retrieved in exactly the same way as I do now. This timer is started when the form starts, and the check only runs after Message(double) has set a variable to true (a message arrived). When the timer is started from inside the Message function, I get the same error as above. Starting another thread when the form starts is also not possible. Is there someone with experience with COM interop? When this timer
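    This E_NOINTERFACE pattern often appears when the DLL invokes Message on a different COM apartment (thread) than the one where canreceiver was created, so the runtime callable wrapper cannot QueryInterface across apartments. A minimal sketch of one possible workaround, assuming a Windows Forms context; the uiContext field is an illustrative addition, not part of CANSUPPORTLib:

        private System.Threading.SynchronizationContext uiContext;

        private void Form1_Load(object sender, EventArgs e)
        {
            // Captured on the thread that created the COM objects.
            uiContext = System.Threading.SynchronizationContext.Current;
        }

        public void Message(double dTimeStamp)
        {
            // Hop back to the creating thread before touching canreceiver.
            uiContext.Post(_ =>
            {
                var message = new CANSUPPORTLib.can_msg_tag();
                message = (CANSUPPORTLib.can_msg_tag)System.Runtime.InteropServices.Marshal.PtrToStructure(
                    canreceiver.RawMessage, message.GetType());
                for (int i = 0; i < message.data.Length; i++)
                {
                    Console.WriteLine("byte " + i + ": " + message.data[i]);
                }
            }, null);
        }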


  • Bit Reversal using bitwise

    - by Yongwei Xing
    Hi all, I am trying to do bit reversal on an integer. I use the code below:

        static int BitReversal(int n)
        {
            int u0 = 0x55555555; // 01010101010101010101010101010101
            int u1 = 0x33333333; // 00110011001100110011001100110011
            int u2 = 0x0F0F0F0F; // 00001111000011110000111100001111
            int u3 = 0x00FF00FF; // 00000000111111110000000011111111
            int u4 = 0x0000FFFF; // 00000000000000001111111111111111
            int x, y, z;
            x = n;
            y = (x >> 1) & u0;  z = (x & u0) << 1;  x = y | z;
            y = (x >> 2) & u1;  z = (x & u1) << 2;  x = y | z;
            y = (x >> 4) & u2;  z = (x & u2) << 4;  x = y | z;
            y = (x >> 8) & u3;  z = (x & u3) << 8;  x = y | z;
            y = (x >> 16) & u4; z = (x & u4) << 16; x = y | z;
            return x;
        }

    It can reverse the bits (on a 32-bit machine), but there is a problem. For example, if the input is 10001111101, I want to get 10111110001, but this method reverses the whole word including the leading 0s; the output is 10111110001000000000000000000000. Is there any method to reverse only the significant bits? I do not want to convert it to a string, reverse that, and convert it back. Is there any pure math or bit-operation method? Best Regards,
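    One way to reverse only the significant bits, offered as a minimal sketch: shift bits out of the low end into a result until the input is exhausted, so the leading zeros never participate.

        static int ReverseSignificantBits(int n)
        {
            int reversed = 0;
            while (n != 0)
            {
                reversed = (reversed << 1) | (n & 1); // append the lowest bit
                n = (int)((uint)n >> 1);              // unsigned shift so sign bits can't stick
            }
            return reversed;
        }

        // ReverseSignificantBits of binary 10001111101 yields binary 10111110001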


  • Dojo extending dojo.dnd.Source, move not happening. Ideas?

    - by Soulhuntre
    Hey all, OK... I have a simple Dojo page with the bare essentials: three ULs with some LIs in them. The idea is to allow drag-and-drop among them, but if any UL goes empty because the last item was dragged out, I will put up a message to give the user some instructions. In order to do that, I wanted to extend the dojo.dnd.Source dijit and add some intelligence. It seemed easy enough. To keep things simple (I am loading Dojo from a CDN) I am simply declaring my extension as opposed to doing a full module load. The declaration function is here...

        function declare_mockupSmartDndUl(){
            dojo.require("dojo.dnd.Source");
            dojo.provide("mockup.SmartDndUl");
            dojo.declare("mockup.SmartDndUl", dojo.dnd.Source, {
                markupFactory: function(params, node){
                    //params._skipStartup = true;
                    return new mockup.SmartDndUl(node, params);
                },
                onDndDrop: function(source, nodes, copy){
                    console.debug('onDndDrop!');
                    if(this == source){
                        // reordering items
                        console.debug('moving items from us');
                        // DO SOMETHING HERE
                    }else{
                        // moving items to us
                        console.debug('moving items to us');
                        // DO SOMETHING HERE
                    }
                    console.debug('this = ' + this);
                    console.debug('source = ' + source);
                    console.debug('nodes = ' + nodes);
                    console.debug('copy = ' + copy);
                    return dojo.dnd.Source.prototype.onDndDrop.call(this, source, nodes, copy);
                }
            });
        }

    I have an init function that uses this to decorate the lists...

        dojo.addOnLoad(function(){
            declare_mockupSmartDndUl();
            if(dojo.byId('list1')){
                //new mockup.SmartDndUl(dojo.byId('list1'));
                new dojo.dnd.Source(dojo.byId('list1'));
            }
            if(dojo.byId('list2')){
                new mockup.SmartDndUl(dojo.byId('list2'));
                //new dojo.dnd.Source(dojo.byId('list2'));
            }
            if(dojo.byId('list3')){
                new mockup.SmartDndUl(dojo.byId('list3'));
                //new dojo.dnd.Source(dojo.byId('list3'));
            }
        });

    It is fine as far as it goes; you will notice I left "list1" as a standard dojo.dnd.Source for testing. The problem is this: list1 will happily accept items from lists 2 & 3, which move or copy as appropriate. However, lists 2 & 3 refuse to accept items from list1. It is as if the DnD operation is being cancelled, but the debugger does show the dojo.dnd.Source.prototype.onDndDrop.call happening, and the parameters do look OK to me. Now, the documentation here is really weak, so the example I took some of this from may be way out of date (I am using 1.4). Can anyone fill me in on what might be the issue with my extension dijit? Thanks!
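    If the subclass keeps misbehaving, one possible sketch (assuming Dojo 1.4) is to skip subclassing entirely and attach the empty-list check to a plain dojo.dnd.Source with dojo.connect, so your code runs after the stock drop logic instead of replacing it:

        dojo.addOnLoad(function(){
            dojo.forEach(['list1', 'list2', 'list3'], function(id){
                var node = dojo.byId(id);
                if(!node){ return; }
                var source = new dojo.dnd.Source(node);
                dojo.connect(source, "onDndDrop", function(src, nodes, copy, target){
                    // Runs after the built-in drop handling has finished.
                    if(node.getElementsByTagName('li').length === 0){
                        console.debug(id + ' is now empty - show instructions');
                    }
                });
            });
        });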


  • How to best design a date/geographic proximity query on GAE?

    - by Dane
    Hi all, I'm building a directory for finding athletic tournaments on GAE with web2py and a Flex front end. The user selects a location, a radius, and a maximum date from a set of choices. I have a basic version of this query implemented, but it's inefficient and slow. One way I know I can improve it is by condensing the many individual queries I'm using to assemble the objects into bulk queries; I just learned that was possible. But I'm also thinking about a more extensive redesign that utilizes memcache.

    The main problem is that I can't query the datastore by location because GAE won't allow inequality comparisons (<, <=, >=, >) on more than one property in a single query. I'm already using one for date, and I'd need TWO to check both latitude and longitude, so it's a no-go. Currently, my algorithm looks like this:

    1.) Query by date and select
    2.) Use the destination function from geopy's distance module to find the max and min latitudes and longitudes for the supplied distance
    3.) Loop through the results and remove all with lat/lng outside the max/min
    4.) Loop through again and use the distance function to check the exact distance, because step 2 will include some areas outside the radius; remove results outside the supplied distance (is this 2/3/4 combination inefficient?)
    5.) Assemble many-to-many lists and attach them to objects (this is where I need to switch to bulk operations)
    6.) Return to client

    Here's my plan for using memcache... let me know if I'm way out in left field on this, as I have no prior experience with memcache or server caching in general.

    - Keep a list in the cache filled with "geo objects" that represent all my data. These have five properties: latitude, longitude, event_id, event_type (in anticipation of expanding beyond tournaments), and start_date. This list will be sorted by date.
    - Also keep a dict of pointers in the cache which represent the start and end indices in the cache for all the date ranges my app uses (next week, 2 weeks, month, 3 months, 6 months, year, 2 years).
    - Have a scheduled task that updates the pointers daily at 12am.
    - Add new inserts to the cache as well as the datastore; update the pointers.

    Using this design, the algorithm would now look like:

    1.) Use the pointers to slice off the appropriate chunk of the list based on the supplied date.
    2-4.) Same as the above algorithm, except with geo objects
    5.) Use a bulk operation to select full tournaments using the remaining geo objects' event_ids
    6.) Assemble the many-to-manys
    7.) Return to client

    Thoughts on this approach? Many thanks for reading and any advice you can give. -Dane
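    A minimal sketch of steps 2-4, assuming geopy's distance module and in-memory result objects with lat/lng attributes (the attribute names are illustrative):

        from geopy import distance

        def filter_by_radius(results, center, radius_miles):
            d = distance.distance(miles=radius_miles)
            # Bounding box from destinations due north/east/south/west of center.
            north = d.destination(center, 0).latitude
            east = d.destination(center, 90).longitude
            south = d.destination(center, 180).latitude
            west = d.destination(center, 270).longitude

            # Cheap rectangle test first...
            boxed = [r for r in results
                     if south <= r.lat <= north and west <= r.lng <= east]

            # ...then the exact (more expensive) distance check.
            return [r for r in boxed
                    if distance.distance(center, (r.lat, r.lng)).miles <= radius_miles]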


  • Unity and Object Creation

    - by William
    I am using Unity as my IoC container. I am trying to resolve a type of IProviderRepository. The concrete implementation has a constructor that accepts a type of IRepository. When I remove the constructor parameter from the concrete implementation, everything works fine, and I am sure the container is wired correctly. When I try to create the concrete object with the constructor, I receive the following error:

        "The current build operation (build key Build Key[EMRGen.Infrastructure.Data.IRepository`1[EMRGen.Model.Provider.Provider], null]) failed: The current type, EMRGen.Infrastructure.Data.IRepository`1[EMRGen.Model.Provider.Provider], is an interface and cannot be constructed. Are you missing a type mapping? (Strategy type BuildPlanStrategy, index 3)"

    Is it possible to achieve the above-mentioned functionality with Unity? Namely, have Unity infer a concrete type from the interface and also inject the constructor of the concrete type with the appropriate concrete object based on the constructor parameters. Below is a sample of my types defined in Unity and a skeleton class listing for what I want to achieve. IProviderRepository is implemented by ProviderRepository, which has a constructor that expects a type of IRepository.

        <typeAlias alias="ProviderRepositoryInterface" type="EMRGen.Model.Provider.IProviderRepository, EMRGen.Model" />
        <typeAlias alias="ProviderRepositoryConcrete" type="EMRGen.Infrastructure.Repositories.Providers.ProviderRepository, EMRGen.Infrastructure.Repositories" />
        <typeAlias alias="ProviderGenericRepositoryInterface" type="EMRGen.Infrastructure.Data.IRepository`1[[EMRGen.Model.Provider.IProvider, EMRGen.Model]], EMRGen.Infrastructure" />
        <typeAlias alias="ProviderGenericRepositoryConcrete" type="EMRGen.Infrastructure.Repositories.EntityFramework.ApplicationRepository`1[[EMRGen.Model.Provider.Provider, EMRGen.Model]], EMRGen.Infrastructure.Repositories" />
        <!-- Provider Mapping-->
        <typeAlias alias="ProviderInterface" type="EMRGen.Model.Provider.IProvider, EMRGen.Model" />
        <typeAlias alias="ProviderConcrete" type="EMRGen.Model.Provider.Doctor, EMRGen.Model" />

        //Illustrates the call being made inside my class
        public class PrescriptionService
        {
            PrescriptionService()
            {
                IUnityContainer uc = UnitySingleton.Instance.Container;
                UnityServiceLocator unityServiceLocator = new UnityServiceLocator(uc);
                ServiceLocator.SetLocatorProvider(() => unityServiceLocator);
                IProviderRepository pRepository = ServiceLocator.Current.GetInstance<IProviderRepository>();
            }
        }

        public class GenericRepository<IProvider> : IRepository<IProvider>
        {
        }

        public class ProviderRepository : IProviderRepository
        {
            private IRepository<IProvider> _genericProviderRepository;

            //Explicit public default constructor
            public ProviderRepository(IRepository<IProvider> genericProviderRepository)
            {
                _genericProviderRepository = genericProviderRepository;
            }
        }
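    The error says Unity has no mapping from IRepository`1 to a concrete type, so it cannot satisfy ProviderRepository's constructor. A sketch of one possible fix using code-based registration with open generics (supported in Unity 1.2+; the class names follow the question's configuration):

        var container = new UnityContainer();

        // Map the open generic interface to the open generic implementation,
        // so IRepository<T> resolves to ApplicationRepository<T> for any T.
        container.RegisterType(typeof(IRepository<>), typeof(ApplicationRepository<>));

        // Map the repository interface; Unity builds the IRepository<Provider>
        // constructor argument from the open generic registration above.
        container.RegisterType<IProviderRepository, ProviderRepository>();

        var repository = container.Resolve<IProviderRepository>();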


  • javascript problem on tab control

    - by mahfuz
    My tab control has four items. Each tab item is an individual user control, and each has JavaScript methods. But only tab item 1 (index=0) always works well; the JavaScript of the remaining items doesn't work. Individually they all work well; the problem arises when I put them in the tab control. When I click the other tab items, the ascx page of tab item 1 (index=0) always loads its JavaScript. The server-side C# event code works well for all tab items. How do I solve this problem? What is the problem? I am using DevExpress tools.

        <table>
          <tr>
            <td>
              <dxtc:ASPxPageControl Width="500px" ID="ASPxPageControl1" runat="server" ActiveTabIndex="0"
                  EnableCallbackCompression="True" EnableHierarchyRecreation="True" AutoPostBack="True">
                <TabPages>
                  <dxtc:TabPage Text="Charge Company">
                    <ContentCollection>
                      <dxw:ContentControl ID="ContentControl1" runat="server">
                        <uc1:UCConfig_Charge_Company_Wise ID="UCConfig_Charge_Company_Wise" runat="server" />
                      </dxw:ContentControl>
                    </ContentCollection>
                  </dxtc:TabPage>
                  <dxtc:TabPage Text="Charge Depository Company">
                    <ContentCollection>
                      <dxw:ContentControl ID="ContentControl3" runat="server">
                        <uc2:UCConfig_Charge_Depository_Company_Wise ID="UCConfig_Charge_Depository_Company_Wise" runat="server" />
                      </dxw:ContentControl>
                    </ContentCollection>
                  </dxtc:TabPage>
                  <dxtc:TabPage Text="Investor Charge ">
                    <ContentCollection>
                      <dxw:ContentControl ID="ContentControl5" runat="server">
                        <uc4:UCConfig_Investor_Account_Wise_Charge ID="UCConfig_Investor_Account_Wise_Charge" runat="server" />
                      </dxw:ContentControl>
                    </ContentCollection>
                  </dxtc:TabPage>
                  <dxtc:TabPage Text="Charge Operation Mode ">
                    <ContentCollection>
                      <dxw:ContentControl ID="ContentControl4" runat="server">
                        <uc3:UCconfig_charge_operation_mode ID="UCconfig_charge_operation_mode" runat="server" />
                      </dxw:ContentControl>
                    </ContentCollection>
                  </dxtc:TabPage>
                </TabPages>
              </dxtc:ASPxPageControl>
            </td>
          </tr>
        </table>

    My first tab control item works well, but the rest of them create problems. I need to call JavaScript from all of them, but calling JavaScript from the others shows me the error below:

        Microsoft JScript runtime error: 'this.GetStateInput().value' is null or not an object


  • How to avoid using duplicate savepoint names in nested transactions in nested stored procs?

    - by Gary McGill
    I have a pattern that I almost always follow, where if I need to wrap up an operation in a transaction, I do this:

        BEGIN TRANSACTION
        SAVE TRANSACTION TX
        -- Stuff
        IF @error <> 0
            ROLLBACK TRANSACTION TX
        COMMIT TRANSACTION

    That's served me well enough in the past, but after years of using this pattern (and copy-pasting the above code), I've suddenly discovered a flaw which comes as a complete shock. Quite often, I'll have a stored procedure calling other stored procedures, all of which use this same pattern. What I've discovered (to my cost) is that because I'm using the same savepoint name everywhere, I can get into a situation where my outer transaction is partially committed - precisely the opposite of the atomicity that I'm trying to achieve.

    I've put together an example that exhibits the problem. This is a single batch (no nested stored procs), so it looks a little odd in that you probably wouldn't use the same savepoint name twice in the same batch, but my real-world scenario would be too confusing to post.

        CREATE TABLE Test (test INTEGER NOT NULL)

        BEGIN TRAN
        SAVE TRAN TX

        BEGIN TRAN
        SAVE TRAN TX
        INSERT INTO Test(test) VALUES (1)
        COMMIT TRAN TX

        BEGIN TRAN
        SAVE TRAN TX
        INSERT INTO Test(test) VALUES (2)
        COMMIT TRAN TX

        DELETE FROM Test

        ROLLBACK TRAN TX
        COMMIT TRAN TX

        SELECT * FROM Test

        DROP TABLE Test

    When I execute this, it lists one record, with value "1". In other words, even though I rolled back my outer transaction, a record was added to the table. What's happening is that the ROLLBACK TRANSACTION TX at the outer level rolls back only as far as the last SAVE TRANSACTION TX at the inner level. Now that I write this all out, I can see the logic behind it: the server is looking back through the log file, treating it as a linear stream of transactions; it doesn't understand the nesting/hierarchy implied by the nesting of the transactions (or, in my real-world scenario, by the calls to other stored procedures).

    So, clearly, I need to start using unique savepoint names instead of blindly using "TX" everywhere. But - and this is where I finally get to the point - is there a way to do this in a copy-pastable way so that I can still use the same code everywhere? Can I auto-generate the savepoint name on the fly somehow? Is there a convention or best practice for doing this sort of thing? It's not exactly hard to come up with a unique name every time you start a transaction (you could base it off the SP name, or some such), but I do worry that eventually there would be a conflict - and you wouldn't know about it, because rather than causing an error it just silently destroys your data... :-(
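    One copy-pastable sketch: SAVE TRANSACTION accepts a variable, so each block can generate its own savepoint name at runtime. Here the name comes from NEWID(); SQL Server uses only the first 32 characters of a savepoint name, which a dash-stripped GUID exactly fits:

        DECLARE @sp CHAR(32);
        SET @sp = REPLACE(CONVERT(CHAR(36), NEWID()), '-', '');

        BEGIN TRANSACTION;
        SAVE TRANSACTION @sp;

        -- Stuff

        IF @error <> 0
            ROLLBACK TRANSACTION @sp;

        COMMIT TRANSACTION;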


  • Migration Core Data error code 134130

    - by magichero
    I want to do migration with 2 Core Data databases. I have read the Apple developer documentation. For the first database, I added some attributes (string, integer, and date properties) to the new model version and, following all the steps, migrated successfully. For the second database, I also added attributes (string, integer, date, transformable, and binary-data properties) to the new model version and followed all the steps (like the first database), but the system returns an error (134130). Here is the code:

        if (persistentStoreCoordinator_) {
            PMReleaseSafely(persistentStoreCoordinator_);
        }

        // Notify
        NSNotificationCenter *nc = [NSNotificationCenter defaultCenter];
        [nc postNotificationName:GCalWillMigrationNotification object:self];

        NSString *sourceStoreType = NSSQLiteStoreType;
        NSString *dataStorePath = [PMUtility dataStorePathForName:GCalDBWarehousePersistentStoreName];
        NSURL *storeURL = [NSURL fileURLWithPath:dataStorePath];
        BOOL storeExists = [[NSFileManager defaultManager] fileExistsAtPath:dataStorePath];

        NSError *error = nil;
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                                 [NSNumber numberWithBool:YES], NSMigratePersistentStoresAutomaticallyOption,
                                 [NSNumber numberWithBool:YES], NSInferMappingModelAutomaticallyOption, nil];
        persistentStoreCoordinator_ = [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:self.managedObjectModel];
        [persistentStoreCoordinator_ addPersistentStoreWithType:sourceStoreType
                                                  configuration:nil
                                                            URL:storeURL
                                                        options:options
                                                          error:&error];
        if (error != nil) {
            abort();
        }

    error is not nil, and below is the log:

        Error Domain=NSCocoaErrorDomain Code=134130 "The operation couldn't be completed. (Cocoa error 134130.)"
        UserInfo=0x856f790 {URL=file://localhost/Users/greensun/Library/Application%20Support/iPhone%20Simulator/5.0/Applications/D10712DE-D9FE-411A-8182-C4F58C60EC6D/Library/Application%20Support/Promise%20Mail/GCalDBWarehouse.sqlite,
        metadata={type = immutable dict, count = 7, entries =
          2 : {contents = "NSStoreModelVersionIdentifiers"} = {type = immutable, count = 1, values = (0 : {contents = ""})}
          4 : {contents = "NSPersistenceFrameworkVersion"} = {value = +386, type = kCFNumberSInt64Type}
          6 : {contents = "NSStoreModelVersionHashes"} = {type = immutable dict, count = 2, entries =
            0 : {contents = "SyncEvent"} = {length = 32, capacity = 32, bytes = 0xfdae355f55c13fbd0344415fea26c8bb ... 4c1721aadd4122aa}
            1 : {contents = "ImportEvent"} = {length = 32, capacity = 32, bytes = 0x7676888f0d7eaff4d1f844343028ce02 ... 040af6cbe8c5fd01}}
          7 : {contents = "NSStoreUUID"} = {contents = "51678BAC-CCFB-4D00-AF5C-8FA1BEDA6440"}
          8 : {contents = "NSStoreType"} = {contents = "SQLite"}
          9 : {contents = "_NSAutoVacuumLevel"} = {contents = "2"}
          10 : {contents = "NSStoreModelVersionHashesVersion"} = {value = +3, type = kCFNumberSInt32Type}},
        reason=Can't find model for source store}

    I have tried a lot of solutions but nothing works. I just added more attributes to the 2 new model versions, and only the first migration succeeds. Thanks for reading and for your help.
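    The reason string ("Can't find model for source store") means Core Data could not find a model version whose entity hashes match the existing store, which typically happens when the old .xcdatamodel version is missing from the built .momd bundle. A small diagnostic sketch using the standard APIs, to check each model version against the on-disk store:

        NSError *metaError = nil;
        NSDictionary *metadata =
            [NSPersistentStoreCoordinator metadataForPersistentStoreOfType:NSSQLiteStoreType
                                                                       URL:storeURL
                                                                     error:&metaError];

        // NO here for every bundled model version means the source version
        // the store was created with is not in the app bundle.
        BOOL compatible = [self.managedObjectModel isConfiguration:nil
                                       compatibleWithStoreMetadata:metadata];
        NSLog(@"current model matches store: %d", compatible);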


  • RIA Services Repository Save does not work!?

    - by Savvas Sopiadis
    Hello everybody! I am doing my first SL4 MVVM RIA-based application, and I ran into the following situation: updating a record (EF4, no POCOs!) in the SL client seems to take place, but the values in the DBMS are unchanged. Debugging with Fiddler, the message on save is (amongst others):

        EntityActions.nil? b9http://schemas.microsoft.com/2003/10/Serialization/Arrays^HasMemberChanges?^Id?^ Operation?Update

    I assume that this says only: hey! the DBMS should do an update on this record, AND nothing more! Is that right?! I'm using a generic repository like this:

        public class Repository<T> : IRepository<T> where T : class
        {
            IObjectSet<T> _objectSet;
            IObjectContext _objectContext;

            public Repository(IObjectContext objectContext)
            {
                this._objectContext = objectContext;
                _objectSet = objectContext.CreateObjectSet<T>();
            }

            public IQueryable<T> AsQueryable() { return _objectSet; }
            public IEnumerable<T> GetAll() { return _objectSet.ToList(); }
            public IEnumerable<T> Find(Expression<Func<T, bool>> where) { return _objectSet.Where(where); }
            public T Single(Expression<Func<T, bool>> where) { return _objectSet.Single(where); }
            public T First(Expression<Func<T, bool>> where) { return _objectSet.First(where); }
            public void Delete(T entity) { _objectSet.DeleteObject(entity); }
            public void Add(T entity) { _objectSet.AddObject(entity); }
            public void Attach(T entity) { _objectSet.Attach(entity); }
            public void Save() { _objectContext.SaveChanges(); }
        }

    The DomainService update method is the following:

        [Update]
        public void UpdateCulture(Culture currentCulture)
        {
            if (currentCulture.EntityState == System.Data.EntityState.Detached)
            {
                this.cultureRepository.Attach(currentCulture);
            }
            this.cultureRepository.Save();
        }

    I know that the currentCulture entity is detached. What confuses me (amongst other things) is this: is the _objectContext still alive? (Which would mean it is aware of the changes made to the record, so simply calling Attach() and then Save() should be enough!?!?) What am I missing?

    Development environment: VS2010 RC - Entity Framework 4 (no POCOs). Thanks in advance.
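    One likely gap, offered as a sketch: ObjectSet<T>.Attach() puts the entity into the Unchanged state, so the following SaveChanges() has nothing to write. Marking the entry Modified after attaching - shown here as a hypothetical repository helper, assuming your IObjectContext can expose the underlying ObjectStateManager - makes the update stick:

        public void AttachAsModified(T entity)
        {
            _objectSet.Attach(entity);

            // Attach alone leaves the entity Unchanged; flag its properties
            // as modified so SaveChanges emits an UPDATE.
            _objectContext.ObjectStateManager.ChangeObjectState(
                entity, System.Data.EntityState.Modified);
        }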


  • Why are there connections open to my databases?

    - by Everett
    I have a program that stores user projects as databases. Naturally, the program should allow the user to create and delete the databases as they need to. When the program boots up, it looks for all the databases in a specific SQL Server instance that have the structure the program is expecting. These databases are then loaded into a listbox so the user can pick one to open as a project to work on.

    When I try to delete a database from the program, I always get a SQL error saying that the database is currently open and the operation fails. I've determined that the code that checks for the databases to load is causing the problem. I'm not sure why, though, because I'm quite sure that all the connections are being properly closed.

    Here are all the relevant functions. After calling BuildProjectList, running "DROP DATABASE database_name" from ExecuteSQL fails with the message: "Cannot drop database because it is currently in use". I'm using SQL Server 2005.

        private SqlConnection databaseConnection;
        private string connectionString;
        private ArrayList databases;

        public ArrayList BuildProjectList()
        {
            //databases is an ArrayList of all the databases in an instance
            if (databases.Count <= 0)
            {
                return null;
            }
            ArrayList databaseNames = new ArrayList();
            for (int i = 0; i < databases.Count; i++)
            {
                string db = databases[i].ToString();
                connectionString = "Server=localhost\\SQLExpress;Trusted_Connection=True;Database=" + db + ";";
                //Check if the database has the table required for the project
                string sql = "select * from TableExpectedToExist";
                if (ExecuteSQL(sql))
                {
                    databaseNames.Add(db);
                }
            }
            return databaseNames;
        }

        private bool ExecuteSQL(string sql)
        {
            bool success = false;
            openConnection();
            SqlCommand cmd = new SqlCommand(sql, databaseConnection);
            try
            {
                cmd.ExecuteNonQuery();
                success = true;
            }
            catch (SqlException ae)
            {
                MessageBox.Show(ae.Message.ToString());
            }
            closeConnection();
            return success;
        }

        public void openConnection()
        {
            databaseConnection = new SqlConnection(connectionString);
            try
            {
                databaseConnection.Open();
            }
            catch (Exception e)
            {
                MessageBox.Show(e.ToString(), "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
            }
        }

        public void closeConnection()
        {
            if (databaseConnection != null)
            {
                try
                {
                    databaseConnection.Close();
                }
                catch (Exception e)
                {
                    MessageBox.Show(e.ToString(), "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
                }
            }
        }
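    A probable explanation, offered as a sketch: ADO.NET connection pooling keeps the physical connections open after SqlConnection.Close(), so each database touched by BuildProjectList still has a pooled connection pinning it. One possible approach is to flush the pools (and optionally force the database to single-user mode) before dropping; the DROP must run on a connection to master, and the database name below is illustrative:

        // Release pooled physical connections before DROP DATABASE.
        SqlConnection.ClearAllPools();

        // Optionally kick out any remaining sessions first; guard the name
        // against SQL injection in real code.
        string drop =
            "ALTER DATABASE [MyProject] SET SINGLE_USER WITH ROLLBACK IMMEDIATE; " +
            "DROP DATABASE [MyProject];";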


  • How to configure maximum number of transport channels in WCF using basicHttpBinding?

    - by Hemant
    Consider the following code, which is essentially a WCF host:

        [ServiceContract(Namespace = "http://www.mightycalc.com")]
        interface ICalculator
        {
            [OperationContract]
            int Add(int aNum1, int aNum2);
        }

        [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
        class Calculator : ICalculator
        {
            public int Add(int aNum1, int aNum2)
            {
                Thread.Sleep(2000); //Simulate a lengthy operation
                return aNum1 + aNum2;
            }
        }

        class Program
        {
            static void Main(string[] args)
            {
                try
                {
                    using (var serviceHost = new ServiceHost(typeof(Calculator)))
                    {
                        var httpBinding = new BasicHttpBinding(BasicHttpSecurityMode.None);
                        serviceHost.AddServiceEndpoint(typeof(ICalculator), httpBinding, "http://172.16.9.191:2221/calc");
                        serviceHost.Open();

                        Console.WriteLine("Service is running. ENJOY!!!");
                        Console.WriteLine("Type 'stop' and hit enter to stop the service.");
                        Console.ReadLine();

                        if (serviceHost.State == CommunicationState.Opened)
                            serviceHost.Close();
                    }
                }
                catch (Exception e)
                {
                    Console.WriteLine(e);
                    Console.ReadLine();
                }
            }
        }

    Also, the WCF client program is:

        class Program
        {
            static int COUNT = 0;
            static Timer timer = null;

            static void Main(string[] args)
            {
                var threads = new Thread[10];
                for (int i = 0; i < threads.Length; i++)
                {
                    threads[i] = new Thread(Calculate);
                    threads[i].Start(null);
                }

                timer = new Timer(o => Console.WriteLine("Count: {0}", COUNT), null, 1000, 1000);
                Console.ReadLine();
                timer.Dispose();
            }

            static void Calculate(object state)
            {
                var c = new CalculatorClient("BasicHttpBinding_ICalculator");
                c.Open();
                while (true)
                {
                    try
                    {
                        var sum = c.Add(2, 3);
                        Interlocked.Increment(ref COUNT);
                    }
                    catch (Exception ex)
                    {
                        Console.WriteLine("Error on thread {0}: {1}", Thread.CurrentThread.Name, ex.GetType());
                        break;
                    }
                }
                c.Close();
            }
        }

    Basically, I am creating 10 proxy clients and then repeatedly calling the Add service method. Now if I run both applications and observe the open TCP connections using netstat, I find that:

    1. If both client and server are running on the same machine, the number of TCP connections equals the number of proxy objects. This means all requests are served in parallel, which is good.
    2. If I run the server on a separate machine, I observe that at most 2 TCP connections are opened, regardless of the number of proxy objects I create. Only 2 requests run in parallel, which hurts the processing speed badly.
    3. If I switch to net.tcp binding, everything works fine (a separate TCP connection for each proxy object, even when they run on different machines).

    I am very confused and unable to make basicHttpBinding use more TCP connections. I know it is a long question, but please help!
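    The 2-connection cap is the classic client-side HTTP throttle: System.Net allows 2 concurrent connections per remote host by default (following the old HTTP/1.1 guideline), and WCF's HTTP bindings sit on top of that stack; loopback traffic is exempt, which would explain why the local test showed 10. A sketch of one way to raise the limit on the client:

        // Before creating the proxies:
        System.Net.ServicePointManager.DefaultConnectionLimit = 10;

    or equivalently in the client's app.config:

        <system.net>
          <connectionManagement>
            <add address="*" maxconnection="10" />
          </connectionManagement>
        </system.net>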


  • Cannot install Apache Web Server on Ubuntu, Amazon WS

    - by Eugene Retunsky
    I enter the command apt-get install apache2 --fix-missing (under the root user) and this is what I receive:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following extra packages will be installed:
          apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common libapr1
          libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap ssl-cert
        Suggested packages:
          apache2-doc apache2-suexec apache2-suexec-custom openssl-blacklist
        The following NEW packages will be installed:
          apache2 apache2-mpm-worker apache2-utils apache2.2-bin apache2.2-common
          libapr1 libaprutil1 libaprutil1-dbd-sqlite3 libaprutil1-ldap ssl-cert
        0 upgraded, 10 newly installed, 0 to remove and 36 not upgraded.
        Need to get 2,945 kB/3,141 kB of archives.
        After this operation, 10.4 MB of additional disk space will be used.
        Do you want to continue [Y/n]? y
        Err http://us-west-1.ec2.archive.ubuntu.com/ubuntu/ oneiric-updates/main apache2.2-bin i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 10.161.51.124 80]
        Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2.2-bin i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 91.189.92.167 80]
        Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2-utils i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 91.189.92.167 80]
        Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2.2-common i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 91.189.92.167 80]
        Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2-mpm-worker i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 91.189.92.167 80]
        Err http://security.ubuntu.com/ubuntu/ oneiric-security/main apache2 i386 2.2.20-1ubuntu1.1
          404 Not Found [IP: 91.189.92.167 80]
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2.2-bin_2.2.20-1ubuntu1.1_i386.deb  404 Not Found [IP: 91.189.92.167 80]
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2-utils_2.2.20-1ubuntu1.1_i386.deb  404 Not Found [IP: 91.189.92.167 80]
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2.2-common_2.2.20-1ubuntu1.1_i386.deb  404 Not Found [IP: 91.189.92.167 80]
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2-mpm-worker_2.2.20-1ubuntu1.1_i386.deb  404 Not Found [IP: 91.189.92.167 80]
        Failed to fetch http://security.ubuntu.com/ubuntu/pool/main/a/apache2/apache2_2.2.20-1ubuntu1.1_i386.deb  404 Not Found [IP: 91.189.92.167 80]
        Unable to correct missing packages.
        E: Aborting install.

    Any help is appreciated.
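    Those 404s usually mean the local package index is stale: the mirror has replaced 2.2.20-1ubuntu1.1 with a newer build, but apt is still asking for the old filenames. A minimal sketch of the usual fix is to refresh the index first:

        apt-get update
        apt-get install apache2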


  • Accurately display upload progress in Silverlight upload

    - by Matt
    I'm trying to debug a file upload/download issue I'm having. I've got a Silverlight file uploader, and to transmit the files I make use of the HttpWebRequest class. So I create a connection to my file upload handler on the server and begin transmitting. While a file uploads, I keep track of the total bytes written to the request stream so I can figure out a percentage.

    Now, working at home I've got a rather slow connection, and I think Silverlight, or the browser, is lying to me. It seems that my upload progress logic is inaccurate. When I do multiple file uploads (24 images of 3-6 MB each in my testing), the logic reports that the files finish uploading, but I believe it only reflects the progress of bytes written to the request stream, not the actual number of bytes uploaded.

    What is the most accurate way to measure upload progress? Here's the logic I'm using:

        public void Upload()
        {
            if (_TargetFile != null)
            {
                Status = FileUploadStatus.Uploading;
                Abort = false;
                long diff = _TargetFile.Length - BytesUploaded;
                UriBuilder ub = new UriBuilder(App.siteUrl + "upload.ashx");
                bool complete = diff <= ChunkSize;
                ub.Query = string.Format("{3}name={0}&StartByte={1}&Complete={2}",
                    fileName, BytesUploaded, complete,
                    string.IsNullOrEmpty(ub.Query) ? "" : ub.Query.Remove(0, 1) + "&");

                HttpWebRequest webrequest = (HttpWebRequest)WebRequest.Create(ub.Uri);
                webrequest.Method = "POST";
                webrequest.BeginGetRequestStream(WriteCallback, webrequest);
            }
        }

        private void WriteCallback(IAsyncResult asynchronousResult)
        {
            HttpWebRequest webrequest = (HttpWebRequest)asynchronousResult.AsyncState;
            // End the operation.
            Stream requestStream = webrequest.EndGetRequestStream(asynchronousResult);
            byte[] buffer = new Byte[4096];
            int bytesRead = 0;
            int tempTotal = 0;

            Stream fileStream = _TargetFile.OpenRead();
            fileStream.Position = BytesUploaded;
            while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) != 0
                   && tempTotal + bytesRead < ChunkSize && !Abort)
            {
                requestStream.Write(buffer, 0, bytesRead);
                requestStream.Flush();
                BytesUploaded += bytesRead;
                tempTotal += bytesRead;

                int percent = (int)((BytesUploaded / (double)_TargetFile.Length) * 100);
                UploadPercent = percent;
                if (UploadProgressChanged != null)
                {
                    UploadProgressChangedEventArgs args = new UploadProgressChangedEventArgs(
                        percent, bytesRead, BytesUploaded, _TargetFile.Length, _TargetFile.Name);
                    SmartDispatcher.BeginInvoke(() => UploadProgressChanged(this, args));
                }
            }

            // Only close the stream if it came from the file; don't close resizestream
            // so we don't have to resize it over again.
            fileStream.Close();
            requestStream.Close();
            webrequest.BeginGetResponse(ReadCallback, webrequest);
        }
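    The suspicion is plausible: by default HttpWebRequest buffers the request body, so writes to the stream complete as soon as they hit the in-memory buffer, long before the bytes cross the wire. A hedged sketch of one mitigation, assuming the Silverlight client HTTP stack (whether AllowWriteStreamBuffering is available depends on the Silverlight version):

        // Opt in to the client HTTP stack, then disable write buffering so
        // Write() calls track actual network progress more closely.
        WebRequest.RegisterPrefix("http://", System.Net.Browser.WebRequestCreator.ClientHttp);

        HttpWebRequest webrequest = (HttpWebRequest)WebRequest.Create(ub.Uri);
        webrequest.Method = "POST";
        webrequest.AllowWriteStreamBuffering = false;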


  • c# Uploading files to ftp server

    - by etarvt
    Hi there, I have a problem with uploading files to an FTP server. I have a few buttons, and every button uploads different files to the FTP server. The first time a button is clicked, the file is uploaded successfully, but the second and later tries fail with "The operation has timed out". When I close the site and then open it again, I can again upload only one file. I am sure that I am allowed to overwrite files on the FTP server. Here's the code:

        protected void btn_export_OnClick(object sender, EventArgs e)
        {
            Stream stream = new MemoryStream();
            stream.Position = 0;
            // fill the stream
            bool res = this.UploadFile(stream, "test.csv", "dir");
            stream.Close();
        }

        private bool UploadFile(Stream stream, string filename, string ftp_dir)
        {
            stream.Seek(0, SeekOrigin.Begin);
            string uri = String.Format("ftp://{0}/{1}/{2}", "host", ftp_dir, filename);
            try
            {
                FtpWebRequest reqFTP = (FtpWebRequest)FtpWebRequest.Create(new Uri(uri));
                reqFTP.Credentials = new NetworkCredential("user", "pass");
                reqFTP.Method = WebRequestMethods.Ftp.UploadFile;
                reqFTP.KeepAlive = false;
                reqFTP.UseBinary = true;
                reqFTP.UsePassive = true;
                reqFTP.ContentLength = stream.Length;
                reqFTP.EnableSsl = true; // it's the FTPES type of FTP

                int buffLen = 2048;
                byte[] buff = new byte[buffLen];
                int contentLen;
                try
                {
                    Stream ftpStream = reqFTP.GetRequestStream();
                    contentLen = stream.Read(buff, 0, buffLen);
                    while (contentLen != 0)
                    {
                        ftpStream.Write(buff, 0, contentLen);
                        contentLen = stream.Read(buff, 0, buffLen);
                    }
                    ftpStream.Flush();
                    ftpStream.Close();
                }
                catch (Exception exc)
                {
                    this.lbl_error.Text = "Error:<br />" + exc.Message;
                    this.lbl_error.Visible = true;
                    return false;
                }
            }
            catch (Exception exc)
            {
                this.lbl_error.Text = "Error:<br />" + exc.Message;
                this.lbl_error.Visible = true;
                return false;
            }
            return true;
        }

    Does anyone have an idea what may cause this strange behaviour? I think I'm closing all the streams correctly. Could this be related to the FTP server settings? The admin said that the FTP handshake never happens the second time.
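    One detail that commonly causes exactly this one-success-then-timeout pattern: the code never calls GetResponse(), so the FTP exchange is never completed and the underlying connection isn't released cleanly; the next request then waits on it until it times out. A sketch of the missing step, placed right after the request stream is closed:

        ftpStream.Close();

        // Complete the FTP exchange and release the connection.
        using (FtpWebResponse response = (FtpWebResponse)reqFTP.GetResponse())
        {
            // Optionally inspect response.StatusDescription here.
        }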


  • LINQ to SQL: To Attach or Not To Attach

    - by bradhe
    So I'm having a really hard time figuring out when I should attach to an object and when I shouldn't. First things first, here is a small diagram of my (very simplified) object model. Edit: Okay, apparently I'm not allowed to post images... here you go: http://i.imgur.com/2ROFI.png

    In my DAL I create a new DataContext every time I do a data-related operation. Say, for instance, I want to save a new user. In my business layer I create a new user:

        var user = new User();
        user.FirstName = "Bob";
        user.LastName = "Smith";
        user.Username = "bob.smith";
        user.Password = StringUtilities.EncodePassword("MyPassword123");
        user.Organization = someOrganization;
        // Assume that someOrganization was loaded and its data context has been garbage collected.

    Now I want to go save this user:

        var userRepository = new RepositoryFactory.GetRepository<UserRepository>();
        userRepository.Save(user);

    Neato! Here is my save logic:

        public void Save(User user)
        {
            if (!DataContext.Users.Contains(user))
            {
                user.Id = Guid.NewGuid();
                user.CreatedDate = DateTime.Now;
                user.Disabled = false;

                //DataContext.Organizations.Attach(user.Organization);
                DataContext.Users.InsertOnSubmit(user);
            }
            else
            {
                DataContext.Users.Attach(user);
            }

            DataContext.SubmitChanges();

            // Finished here as well.
            user.Detach();
        }

    So, here we are. You'll notice that I commented out the bit where the DataContext attaches to the organization. If I attach to the organization, I get the following exception:

        NotSupportedException: An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported.

    Hmm, that doesn't work. Let me try it without attaching (i.e., comment out that line about attaching to the organization):

        DuplicateKeyException: Cannot add an entity with a key that is already in use.

    WHAAAAT? I can only assume this is trying to insert a new organization, which is obviously false. So, what's the deal, guys? What should I do? What is the proper approach? It seems like L2S makes this quite a bit harder than it should be...
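    A sketch of one common LINQ to SQL pattern for this situation: don't attach the related Organization object at all; set the foreign-key ID instead and let the new context resolve it, since attaching an entity that a live context loaded is what triggers the NotSupportedException. The property names here are assumptions about the generated model, and the Attach(entity, true) branch requires a timestamp/rowversion column for optimistic concurrency:

        public void Save(User user)
        {
            using (var context = new MyDataContext())
            {
                if (user.Id == Guid.Empty) // hypothetical "is new" test
                {
                    user.Id = Guid.NewGuid();
                    user.CreatedDate = DateTime.Now;
                    user.Disabled = false;

                    // Reference the organization by key instead of by object graph.
                    user.Organization = null;
                    user.OrganizationId = someOrganizationId;

                    context.Users.InsertOnSubmit(user);
                }
                else
                {
                    // Attach as modified so SubmitChanges issues an UPDATE.
                    context.Users.Attach(user, true);
                }
                context.SubmitChanges();
            }
        }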


  • jquery data selector

    - by Tauren
    I need to select elements based on values stored in an element's .data() object. At a minimum, I'd like to select top-level data properties using selectors, perhaps like this:

        $('a').data("category", "music");
        $('a:data(category=music)');

    Or perhaps the selector would be in regular attribute-selector format:

        $('a[category=music]');

    Or in attribute format, but with a specifier to indicate it is in .data():

        $('a[:category=music]');

    I've found James Padolsey's implementation to look simple, yet good. The selector formats above mirror methods shown on that page. There is also this Sizzle patch. For some reason, I recall reading a while back that jQuery 1.4 would include support for selectors on values in the jQuery .data() object. However, now that I'm looking for it, I can't find it. Maybe it was just a feature request that I saw. Is there support for this and I'm just not seeing it?

    Ideally, I'd like to support sub-properties in data() using dot notation, like this:

        $('a').data("user", {name: {first: "Tom", last: "Smith"}, username: "tomsmith"});
        $('a[:user.name.first=Tom]');

    I also would like to support multiple data selectors, where only elements with ALL the specified data selectors are found. The regular jQuery multiple selector does an OR operation; for instance, $('a.big, a.small') selects a tags with either class big or small. I'm looking for an AND, perhaps like this:

        $('a').data("artist", {id: 3281, name: "Madonna"});
        $('a').data("category", "music");
        $('a[:category=music && :artist.name=Madonna]');

    Lastly, it would be great if comparison operators and regex features were available on data selectors, so $(a[:artist.id>5000]) would be possible. I realize I could probably do much of this using filter(), but it would be nice to have a simple selector format.

    What solutions are available to do this? Is James Padolsey's the best solution at this time? My concern is primarily performance, but also the extra features like sub-property dot notation and multiple data selectors. Are there other implementations that support these things or are better in some way?
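    For reference, a minimal sketch of a custom :data pseudo-selector with dot-notation support, built on jQuery's $.expr[':'] extension point (the parsing is deliberately simplistic - equality only, no operators or regexes):

        $.expr[':'].data = function(el, i, match) {
            // match[3] is the text inside :data(...), e.g. "user.name.first=Tom"
            var expr = match[3].split('='),
                path = expr[0].split('.'),
                expected = expr[1],
                value = $(el).data(path.shift());

            // Walk the remaining dot-path into nested objects.
            while (value != null && path.length) {
                value = value[path.shift()];
            }
            return expected === undefined ? value !== undefined
                                          : String(value) === expected;
        };

        // Usage: $('a:data(category=music)') or $('a:data(user.name.first=Tom)').
        // AND semantics come free by chaining pseudo-classes:
        // $('a:data(category=music):data(artist.name=Madonna)')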


  • Problem running the Boost example blocking_udp_echo_client on Mac OS X

    - by n179911
    I am trying to run blocking_udp_echo_client on Mac OS X:
    http://www.boost.org/doc/libs/1_35_0/doc/html/boost_asio/example/echo/blocking_udp_echo_client.cpp

    I run it with the arguments 'localhost 9000', but the program crashes, and this is the line in the source which crashes:

        udp::socket s(io_service, udp::endpoint(udp::v4(), 0));

    This is the stack trace (the template parameters were mangled in transit; the frames are reproduced as captured):

        #0  0x918c3e42 in __kill
        #1  0x918c3e34 in kill$UNIX2003
        #2  0x9193623a in raise
        #3  0x91942679 in abort
        #4  0x940d96f9 in __gnu_debug::_Error_formatter::_M_error
        #5  0x0000e76e in __gnu_debug::_Safe_iterator::op_base* , __gnu_debug_def::list::op_base*, std::allocator::op_base* ::_Safe_iterator at safe_iterator.h:124
        #6  0x00014729 in boost::asio::detail::hash_map::op_base*::bucket_type::bucket_type at hash_map.hpp:277
        #7  0x00019e97 in std::_Construct::op_base*::bucket_type, boost::asio::detail::hash_map::op_base*::bucket_type at stl_construct.h:81
        #8  0x0001a457 in std::__uninitialized_fill_n_aux::op_base*::bucket_type*, __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type , unsigned long, boost::asio::detail::hash_map::op_base*::bucket_type at stl_uninitialized.h:194
        #9  0x0001a4e1 in std::uninitialized_fill_n::op_base*::bucket_type*, __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type , unsigned long, boost::asio::detail::hash_map::op_base*::bucket_type at stl_uninitialized.h:218
        #10 0x0001a509 in std::__uninitialized_fill_n_a::op_base*::bucket_type*, __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type , unsigned long, boost::asio::detail::hash_map::op_base*::bucket_type, boost::asio::detail::hash_map::op_base*::bucket_type at stl_uninitialized.h:310
        #11 0x0001aa34 in __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type ::_M_fill_insert at vector.tcc:365
        #12 0x0001acda in __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type ::insert at stl_vector.h:658
        #13 0x0001ad81 in __gnu_norm::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type ::resize at stl_vector.h:427
        #14 0x0001ae3a in __gnu_debug_def::vector::op_base*::bucket_type, std::allocator::op_base*::bucket_type ::resize at vector:169
        #15 0x0001b7be in boost::asio::detail::hash_map::op_base*::rehash at hash_map.hpp:221
        #16 0x0001bbeb in boost::asio::detail::hash_map::op_base*::hash_map at hash_map.hpp:67
        #17 0x0001bc74 in boost::asio::detail::reactor_op_queue::reactor_op_queue at reactor_op_queue.hpp:42
        #18 0x0001bd24 in boost::asio::detail::kqueue_reactor::kqueue_reactor at kqueue_reactor.hpp:86
        #19 0x0001c000 in boost::asio::detail::service_registry::use_service at service_registry.hpp:109
        #20 0x0001c14d in boost::asio::use_service at io_service.ipp:195
        #21 0x0001c26d in boost::asio::detail::reactive_socket_service ::reactive_socket_service at reactive_socket_service.hpp:111
        #22 0x0001c344 in boost::asio::detail::service_registry::use_service at service_registry.hpp:109
        #23 0x0001c491 in boost::asio::use_service at io_service.ipp:195
        #24 0x0001c4d5 in boost::asio::datagram_socket_service::datagram_socket_service at datagram_socket_service.hpp:95
        #25 0x0001c59e in boost::asio::detail::service_registry::use_service at service_registry.hpp:109
        #26 0x0001c6eb in boost::asio::use_service at io_service.ipp:195
        #27 0x0001c711 in boost::asio::basic_io_object ::basic_io_object at basic_io_object.hpp:72
        #28 0x0001c783 in boost::asio::basic_socket ::basic_socket at basic_socket.hpp:108
        #29 0x0001c865 in boost::asio::basic_datagram_socket ::basic_datagram_socket at basic_datagram_socket.hpp:107
        #30 0x000027bc in main at main.cpp:32

    This is the gdb output:

        (gdb) continue
        /Developer/SDKs/MacOSX10.5.sdk/usr/include/c++/4.0.0/debug/safe_iterator.h:127:
            error: attempt to copy-construct an iterator from a singular iterator.
        Objects involved in the operation:
        iterator "this" @ 0x0x100420 {
          type = N11__gnu_debug14_Safe_iteratorIN10__gnu_norm14_List_iteratorISt4pairIiPN5boost4asio6detail16reactor_op_queueIiE7op_baseEEEEN15__gnu_debug_def4listISB_SaISB_EEEEE (mutable iterator);
          state = singular;
        }
        iterator "other" @ 0x0xbfffe8a4 {
          type = N11__gnu_debug14_Safe_iteratorIN10__gnu_norm14_List_iteratorISt4pairIiPN5boost4asio6detail16reactor_op_queueIiE7op_baseEEEEN15__gnu_debug_def4listISB_SaISB_EEEEE (mutable iterator);
          state = singular;
        }
        Program received signal: "SIGABRT".
        (gdb) continue
        Program received signal: "?".

    Does someone have any idea why this example does not work on Mac OS X? Thank you.
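    The frames through safe_iterator.h and the __gnu_debug namespaces suggest libstdc++'s debug mode (_GLIBCXX_DEBUG) is active - Xcode 3 debug configurations often define it - and Boost.Asio's header-only containers can trip these singular-iterator checks under it. A possible first experiment is rebuilding without the debug-mode macro; the command line below is purely illustrative, the point being the absence of -D_GLIBCXX_DEBUG:

        g++ -I/path/to/boost blocking_udp_echo_client.cpp -o udp_client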


  • In MySQL, what is the most effective query design for joining large tables with many-to-many relationships?

    - by lighthouse65
    In our application, we collect data on automotive engine performance -- basically source data on engine performance based on the engine type, the vehicle running it, and the engine design. Currently, the basis for new row inserts is an engine on-off period; we monitor performance variables based on a change in engine state from active to inactive and vice versa. The related engineState table looks like this:

        +---------+--------+--------------+---------------------+---------------------+-----------------+
        | vehicle | engine | engine_state | state_start_time    | state_end_time      | engine_variable |
        +---------+--------+--------------+---------------------+---------------------+-----------------+
        | 080025  | E01    | active       | 2008-01-24 16:19:15 | 2008-01-24 16:24:45 |             720 |
        | 080028  | E02    | inactive     | 2008-01-24 16:19:25 | 2008-01-24 16:22:17 |             304 |
        +---------+--------+--------------+---------------------+---------------------+-----------------+

    For a specific analysis, we would like to analyze table content based on a row granularity of minutes, rather than the current basis of active/inactive engine state. For this, we are thinking of creating a simple productionMinute table with a row for each minute in the period we are analyzing, and joining the productionMinute and engineState tables on the date-time columns in each table. So if our period of analysis is from 2009-12-01 to 2010-02-28, we would create a new table with 129,600 rows, one for each minute of each day in that three-month period. The first few rows of the productionMinute table:

        +-------------------+
        | production_minute |
        +-------------------+
        | 2009-12-01 00:00  |
        | 2009-12-01 00:01  |
        | 2009-12-01 00:02  |
        | 2009-12-01 00:03  |
        +-------------------+

    The join between the tables would be:

        engineState AS es LEFT JOIN productionMinute AS pm
          ON es.state_start_time <= pm.production_minute
         AND pm.production_minute <= es.state_end_time

    This join, however, brings up multiple environmental issues:

    1. The engineState table has 5 million rows and the productionMinute table has 130,000 rows.
    2. When an engineState row spans more than one minute (i.e. the difference between es.state_start_time and es.state_end_time is greater than one minute), as is the case in the example above, multiple productionMinute rows join to a single engineState row.
    3. When there is more than one engine in operation during any given minute, also as per the example above, multiple engineState rows join to a single productionMinute row.
    4. In testing our logic, using only a small table extract (one day rather than 3 months for the productionMinute table), the query takes over an hour to generate.

    In researching this item in order to improve performance so that it would be feasible to query three months of data, our thought was to create a temporary table from the engineState one, eliminating any table data that is not critical for the analysis, and joining the temporary table to the productionMinute table. We are also planning to experiment with different joins -- specifically an inner join -- to see if that would improve performance.

    What is the best query design for joining tables with the many:many relationship between the join predicates as outlined above? What is the best join type (left/right, inner)?
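    A sketch of the range join written against the schema above, with the analysis window pushed into both tables so the optimizer can shrink each side before the many-to-many join; the composite index is an assumption about what would help here, not a measured result:

        CREATE INDEX idx_engine_state_times
            ON engineState (state_start_time, state_end_time);

        SELECT pm.production_minute,
               es.vehicle,
               es.engine,
               es.engine_variable
        FROM productionMinute AS pm
        INNER JOIN engineState AS es
                ON es.state_start_time <= pm.production_minute
               AND es.state_end_time   >= pm.production_minute
        WHERE pm.production_minute >= '2009-12-01 00:00:00'
          AND pm.production_minute <  '2010-03-01 00:00:00'
          AND es.state_end_time    >= '2009-12-01 00:00:00'
          AND es.state_start_time  <  '2010-03-01 00:00:00';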


  • Why is numpy's einsum faster than numpy's built in functions?

    - by Ophion
    Let's start with three arrays of dtype=np.double. Timings are performed on an Intel CPU using numpy 1.7.1 compiled with icc and linked to Intel's MKL. An AMD CPU with numpy 1.6.1 compiled with gcc without MKL was also used to verify the timings. Please note the timings scale nearly linearly with system size and are not due to the small overhead incurred in the numpy functions' if statements; such differences would show up in microseconds, not milliseconds:

        arr_1D = np.arange(500, dtype=np.double)
        large_arr_1D = np.arange(100000, dtype=np.double)
        arr_2D = np.arange(500**2, dtype=np.double).reshape(500, 500)
        arr_3D = np.arange(500**3, dtype=np.double).reshape(500, 500, 500)

    First let's look at the np.sum function:

        np.all(np.sum(arr_3D) == np.einsum('ijk->', arr_3D))
        # True

        %timeit np.sum(arr_3D)
        # 10 loops, best of 3: 142 ms per loop

        %timeit np.einsum('ijk->', arr_3D)
        # 10 loops, best of 3: 70.2 ms per loop

    Powers:

        np.allclose(arr_3D*arr_3D*arr_3D, np.einsum('ijk,ijk,ijk->ijk', arr_3D, arr_3D, arr_3D))
        # True

        %timeit arr_3D*arr_3D*arr_3D
        # 1 loops, best of 3: 1.32 s per loop

        %timeit np.einsum('ijk,ijk,ijk->ijk', arr_3D, arr_3D, arr_3D)
        # 1 loops, best of 3: 694 ms per loop

    Outer product:

        np.all(np.outer(arr_1D, arr_1D) == np.einsum('i,k->ik', arr_1D, arr_1D))
        # True

        %timeit np.outer(arr_1D, arr_1D)
        # 1000 loops, best of 3: 411 us per loop

        %timeit np.einsum('i,k->ik', arr_1D, arr_1D)
        # 1000 loops, best of 3: 245 us per loop

    All of the above are twice as fast with np.einsum. These should be apples-to-apples comparisons, as everything is specifically of dtype=np.double. I would expect the speedup in an operation like this:

        np.allclose(np.sum(arr_2D*arr_3D), np.einsum('ij,oij->', arr_2D, arr_3D))
        # True

        %timeit np.sum(arr_2D*arr_3D)
        # 1 loops, best of 3: 813 ms per loop

        %timeit np.einsum('ij,oij->', arr_2D, arr_3D)
        # 10 loops, best of 3: 85.1 ms per loop

    einsum seems to be at least twice as fast for np.inner, np.outer, np.kron, and np.sum regardless of axes selection. The primary exception is np.dot, as it calls DGEMM from a BLAS library. So why is np.einsum faster than other numpy functions that are equivalent?

    The DGEMM case, for completeness:

        np.allclose(np.dot(arr_2D, arr_2D), np.einsum('ij,jk', arr_2D, arr_2D))
        # True

        %timeit np.einsum('ij,jk', arr_2D, arr_2D)
        # 10 loops, best of 3: 56.1 ms per loop

        %timeit np.dot(arr_2D, arr_2D)
        # 100 loops, best of 3: 5.17 ms per loop

    The leading theory is from @seberg's comment that np.einsum can make use of SSE2, while numpy's ufuncs will not until numpy 1.8 (see the change log). I believe this is the correct answer, but have not been able to confirm it. Some limited proof can be found by changing the dtype of the input arrays and observing the speed difference, and by the fact that not everyone observes the same trends in timings.


  • Why is Oracle using a skip scan for this query?

    - by Jason Baker
    Here's the tkprof output for a query that's running extremely slowly (WARNING: it's long :-) ):

        SELECT mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind, mbr_dxcg_ctl_member,
               mbr_employment_start_dt, mbr_employment_term_dt, mbr_entity_active, mbr_ethnicity_idn,
               mbr_general_health_status_code, mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches,
               mbr_highest_edu_level, mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin,
               mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name, mbr_marital_status_cd,
               mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired, mbr_mbr_first_name, mbr_mbr_gender_cd,
               mbr_mbr_idn, mbr_mbr_ins_type, mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name,
               mbr_mbr_name, mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time,
               mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id, mbr_same_mbr_ins,
               mbr_special_needs_cd, mbr_timezone, mbr_upd_dt, mbr_user_idn, mbr_wgt, mbr_work_status_idn
        FROM (SELECT /*+ FIRST_ROWS(1) */
                     mbr_comment_idn, mbr_crt_dt, mbr_data_source, mbr_dol_bl_rmo_ind, mbr_dxcg_ctl_member,
                     mbr_employment_start_dt, mbr_employment_term_dt, mbr_entity_active, mbr_ethnicity_idn,
                     mbr_general_health_status_code, mbr_hand_dominant_code, mbr_hgt_feet, mbr_hgt_inches,
                     mbr_highest_edu_level, mbr_insd_addr_idn, mbr_insd_alt_id, mbr_insd_name, mbr_insd_ssn_tin,
                     mbr_is_smoker, mbr_is_vip, mbr_lmbr_first_name, mbr_lmbr_last_name, mbr_marital_status_cd,
                     mbr_mbr_birth_dt, mbr_mbr_death_dt, mbr_mbr_expired, mbr_mbr_first_name, mbr_mbr_gender_cd,
                     mbr_mbr_idn, mbr_mbr_ins_type, mbr_mbr_isreadonly, mbr_mbr_last_name, mbr_mbr_middle_name,
                     mbr_mbr_name, mbr_mbr_status_idn, mbr_mpi_id, mbr_preferred_am_pm, mbr_preferred_time,
                     mbr_prv_innetwork, mbr_rep_addr_idn, mbr_rep_name, mbr_rp_mbr_id, mbr_same_mbr_ins,
                     mbr_special_needs_cd, mbr_timezone, mbr_upd_dt, mbr_user_idn, mbr_wgt, mbr_work_status_idn,
                     ROWNUM AS ora_rn
              FROM (SELECT mbr.comment_idn AS mbr_comment_idn, mbr.crt_dt AS mbr_crt_dt,
                           mbr.data_source AS mbr_data_source, mbr.dol_bl_rmo_ind AS mbr_dol_bl_rmo_ind,
                           mbr.dxcg_ctl_member AS mbr_dxcg_ctl_member, mbr.employment_start_dt AS mbr_employment_start_dt,
                           mbr.employment_term_dt AS mbr_employment_term_dt, mbr.entity_active AS mbr_entity_active,
                           mbr.ethnicity_idn AS mbr_ethnicity_idn, mbr.general_health_status_code AS mbr_general_health_status_code,
                           mbr.hand_dominant_code AS mbr_hand_dominant_code, mbr.hgt_feet AS mbr_hgt_feet,
                           mbr.hgt_inches AS mbr_hgt_inches, mbr.highest_edu_level AS mbr_highest_edu_level,
                           mbr.insd_addr_idn AS mbr_insd_addr_idn, mbr.insd_alt_id AS mbr_insd_alt_id,
                           mbr.insd_name AS mbr_insd_name, mbr.insd_ssn_tin AS mbr_insd_ssn_tin,
                           mbr.is_smoker AS mbr_is_smoker, mbr.is_vip AS mbr_is_vip,
                           mbr.lmbr_first_name AS mbr_lmbr_first_name, mbr.lmbr_last_name AS mbr_lmbr_last_name,
                           mbr.marital_status_cd AS mbr_marital_status_cd, mbr.mbr_birth_dt AS mbr_mbr_birth_dt,
                           mbr.mbr_death_dt AS mbr_mbr_death_dt, mbr.mbr_expired AS mbr_mbr_expired,
                           mbr.mbr_first_name AS mbr_mbr_first_name, mbr.mbr_gender_cd AS mbr_mbr_gender_cd,
                           mbr.mbr_idn AS mbr_mbr_idn, mbr.mbr_ins_type AS mbr_mbr_ins_type,
                           mbr.mbr_isreadonly AS mbr_mbr_isreadonly, mbr.mbr_last_name AS mbr_mbr_last_name,
                           mbr.mbr_middle_name AS mbr_mbr_middle_name, mbr.mbr_name AS mbr_mbr_name,
                           mbr.mbr_status_idn AS mbr_mbr_status_idn, mbr.mpi_id AS mbr_mpi_id,
                           mbr.preferred_am_pm AS mbr_preferred_am_pm, mbr.preferred_time AS mbr_preferred_time,
                           mbr.prv_innetwork AS mbr_prv_innetwork, mbr.rep_addr_idn AS mbr_rep_addr_idn,
                           mbr.rep_name AS mbr_rep_name, mbr.rp_mbr_id AS mbr_rp_mbr_id,
                           mbr.same_mbr_ins AS mbr_same_mbr_ins, mbr.special_needs_cd AS mbr_special_needs_cd,
                           mbr.timezone AS mbr_timezone, mbr.upd_dt AS mbr_upd_dt,
                           mbr.user_idn AS mbr_user_idn, mbr.wgt AS mbr_wgt,
                           mbr.work_status_idn AS mbr_work_status_idn
                    FROM mbr
                    JOIN mbr_identfn ON mbr.mbr_idn = mbr_identfn.mbr_idn
                    WHERE mbr_identfn.mbr_idn = mbr.mbr_idn
                      AND mbr_identfn.identfd_type = :identfd_type_1
                      AND mbr_identfn.identfd_number = :identfd_number_1
                      AND mbr_identfn.entity_active = :entity_active_1)
              WHERE ROWNUM <= :ROWNUM_1)
        WHERE ora_rn > :ora_rn_1

        call     count    cpu      elapsed    disk   query       current  rows
        -------  ------   ------   --------   ----   ---------   -------  ----
        Parse      9936     0.46       0.49      0           0         0     0
        Execute    9936     0.60       0.59      0           0         0     0
        Fetch      9936   329.87     404.00      0   136966922         0     0
        -------  ------   ------   --------   ----   ---------   -------  ----
        total     29808   330.94     405.09      0   136966922         0     0

        Misses in library cache during parse: 0
        Optimizer mode: FIRST_ROWS
        Parsing user id: 36 (JIVA_DEV)

        Rows     Row Source Operation
        -------  ---------------------------------------------------
              0  VIEW (cr=102 pr=0 pw=0 time=2180 us)
              0   COUNT STOPKEY (cr=102 pr=0 pw=0 time=2163 us)
              0    NESTED LOOPS (cr=102 pr=0 pw=0 time=2152 us)
              0     INDEX SKIP SCAN IDX_MBR_IDENTFN (cr=102 pr=0 pw=0 time=2140 us)(object id 341053)
              0     TABLE ACCESS BY INDEX ROWID MBR (cr=0 pr=0 pw=0 time=0 us)
              0      INDEX UNIQUE SCAN PK_CLAIMANT (cr=0 pr=0 pw=0 time=0 us)(object id 334044)

        Rows     Execution Plan
        -------  ---------------------------------------------------
              0  SELECT STATEMENT MODE: HINT: FIRST_ROWS
              0   VIEW
              0    COUNT (STOPKEY)
              0     NESTED LOOPS
              0      INDEX MODE: ANALYZED (SKIP SCAN) OF 'IDX_MBR_IDENTFN' (INDEX (UNIQUE))
              0      TABLE ACCESS MODE: ANALYZED (BY INDEX ROWID) OF 'MBR' (TABLE)
              0       INDEX MODE: ANALYZED (UNIQUE SCAN) OF 'PK_CLAIMANT' (INDEX (UNIQUE))

    Based on my reading of Oracle's documentation on skip scans, a skip scan is most useful when the first column of an index has a low number of unique values. The thing is that the first column of this index is a unique primary key. So am I correct in assuming that a skip scan is the wrong thing to do here? Also, what kind of scan should it be doing? Should I do some more hinting for this query?

    EDIT: I should also point out that the query's WHERE clause uses the columns in IDX_MBR_IDENTFN and no columns other than what's in that index. So as far as I can tell, I'm not skipping any columns.
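    Since the WHERE clause filters only on identfd_type, identfd_number, and entity_active, the skip scan suggests the index is led by a column those predicates don't constrain (likely mbr_idn), forcing Oracle to probe every distinct leading value. A sketch of one possible fix under that assumption - put the filtered columns first in a dedicated index - is:

        CREATE INDEX idx_mbr_identfn_lookup
            ON mbr_identfn (identfd_type, identfd_number, entity_active, mbr_idn);

    With such an index in place, the plan should switch to an ordinary index range scan; refreshing statistics (e.g. via DBMS_STATS.GATHER_TABLE_STATS) would let the optimizer pick it up.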


  • Right edge border unpainted & theme drawing on non client area

    - by CodeVisio
    Basically, the problem concerns border flickering during window resizing on Windows. My first goal was repositioning controls on a dialog during resizing. I think I achieved good dynamic repositioning with almost no flickering during that operation, but here I'm talking about main-window border flickering, which I wasn't able to eliminate at all.

    To simplify the example, create a simple Win32 app with the default code VS provides. I'm testing it on Windows 7 (64-bit) with the default theme (Windows 7 Basic, no transparency) and VS2008.

    1. Do not add code to the app.
    2. Run it in debug mode.
    3. Drag the left edge of the window slowly toward the left of the screen and at the same time keep an eye on the right edge border of the window. You should see redrawing taking place.
    4. Repeat step 3 moving the mouse rapidly; you should see the flickering on the right edge more clearly.

    If you invert the edges - that is, move the right edge of the window - the left edge stays firmly there without unpainted regions. The same happens for the top edge border vs. the bottom one.

    Now enable the Classic theme (the one similar to Win2000) and repeat the steps above. The right edge is perfectly there, without flickering at all!

    If you keep an eye on the Output window of Visual Studio when you run in debug mode, you should see a list of DLLs loaded together with your exe. If you run in debug mode with the default theme, you will find that uxtheme.dll is loaded. On the contrary, with the Classic theme enabled, uxtheme.dll is not loaded (dwmapi.dll is always loaded). Probably uxtheme.dll is loaded at runtime, based on desktop settings, and it takes over redrawing your window's non-client area.

    Another trick you can use to see the effect of this flickering is to add a case for WM_NCPAINT and return 0 instead of calling DefWindowProc(). Repeating the steps above and moving fast, you should see a big part of the right edge of the window completely erased by background windows. This doesn't happen for the top and bottom edges.

    Any idea how to resolve this flickering? Thank you!

    Read the article

  • Efficiency of data structures in C99 (possibly affected by endianness)

    - by Ninefingers
    Hi All, I have a couple of questions that are all inter-related. Basically, in the algorithm I am implementing, a word w is defined as four bytes, so it can be contained whole in a uint32_t. However, during the operation of the algorithm I often need to access the various parts of the word. Now, I can do this in two ways:

    uint32_t w = 0x11223344;
    uint8_t a = (w & 0xff000000) >> 24;
    uint8_t b = (w & 0x00ff0000) >> 16;
    uint8_t c = (w & 0x0000ff00) >> 8;
    uint8_t d = (w & 0x000000ff);

    However, part of me thinks that isn't particularly efficient. I thought a better way would be to use a union representation like so:

    typedef union {
        struct {
            uint8_t d;
            uint8_t c;
            uint8_t b;
            uint8_t a;
        };
        uint32_t n;
    } word32;

    Using this method I can assign word32 w = 0x11223344; and then access the various parts as I require (w.a == 0x11 on little endian). However, at this stage I come up against endianness issues: on big-endian systems my struct is laid out the wrong way around, so I need to reorder the word before it is passed in. That I can do without too much difficulty. My question, then, is: is the first approach (the bitwise ANDs and shifts) efficient compared to the union implementation? Is there any difference between the two in general? Which way should I go on a modern x86_64 processor? Is endianness just a red herring here? I could inspect the assembly output of course, but my knowledge of compilers is not brilliant. I would have thought the union would be more efficient, as it would essentially convert to a memory offset, like so:

    mov eax, [r9+8]

    Would a compiler realise that is what is happening in the bit-shift case above? If it matters, I'm using C99; specifically, my compiler is clang (LLVM). Thanks in advance.
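
    A small C99 sketch of both access styles side by side may help frame the question. The anonymous struct above is technically a C11/compiler extension, so the member is named here to stay within strict C99; the byte values in the comments assume a little-endian host such as x86_64:

    #include <stdint.h>
    #include <stdio.h>

    typedef union {
        struct {
            uint8_t d; /* lowest-addressed byte */
            uint8_t c;
            uint8_t b;
            uint8_t a; /* highest-addressed byte */
        } by;          /* named member: strict C99 has no anonymous structs */
        uint32_t n;
    } word32;

    int main(void)
    {
        uint32_t w = 0x11223344;

        /* Shift-and-mask selects bytes by numeric value: endian-independent. */
        uint8_t a = (uint8_t)(w >> 24);  /* always 0x11 */
        uint8_t d = (uint8_t)(w & 0xff); /* always 0x44 */

        /* The union selects bytes by memory position: on little-endian
           x86_64, u.by.a == 0x11, but on a big-endian host it would be 0x44. */
        word32 u = { .n = 0x11223344 };

        printf("shift: a=%02x d=%02x  union: a=%02x d=%02x\n", a, d, u.by.a, u.by.d);
        return 0;
    }

    In practice an optimizing compiler recognizes the shift-and-mask pattern and emits single byte loads for both forms, so inspecting the clang -O2 -S output for the actual target is the quickest way to settle the efficiency question.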

    Read the article

  • Symfony: embedRelation() controlling options for nesting multiple levels of relations

    - by wulftone
    Hey all, I'm trying to set some conditional statements for nested embedRelation() instances, and can't find a way to get any kind of option through to the second embedRelation(). I've got a "Measure-Page-Question" table relationship, and I'd like to be able to choose whether or not to display the Question table. For example, say I have two "success" pages, page1Success.php and page2Success.php. On page1 I'd like to display "Measure-Page-Question", and on page2 I'd like to display "Measure-Page", but I need a way to pass an "option" to the PageForm.class.php file to make that kind of decision. My actions.class.php file has something like this:

    // actions.class.php
    $this->form = new measureForm($measure, array('option' => $option));

    to pass an option to the "Page" form, but passing that option through "Page" into "Question" doesn't work. My measureForm.class.php file has an embedRelation() that is dependent on the "option":

    // measureForm.class.php
    if ($this->getOption('option') == "page_1") {
        $this->embedRelation('Page');
    }

    and this is what I'd like to do in my pageForm.class.php file:

    // pageForm.class.php
    if ($this->getOption('option') == "page_1") { // Or != "page_2", or whatever
        $this->embedRelation('Question');
    }

    I can't seem to find a way to do this. Any ideas? Is there a preferred Symfony way of doing this type of operation, perhaps without embedRelation()? Thanks, -Trevor

    As requested, here's my schema.yml:

    # schema.yml
    Measure:
      connection: doctrine
      tableName: measure
      columns:
        _kp_mid:
          type: integer(4)
          fixed: false
          unsigned: false
          primary: true
          autoincrement: true
        description:
          type: string()
          fixed: false
          unsigned: false
          primary: false
          notnull: false
          autoincrement: false
        frequency:
          type: integer(4)
          fixed: false
          unsigned: false
          primary: false
          notnull: false
          autoincrement: false
      relations:
        Page:
          local: _kp_mid
          foreign: _kf_mid
          type: many
    Page:
      connection: doctrine
      tableName: page
      columns:
        _kp_pid:
          type: integer(4)
          fixed: false
          unsigned: false
          primary: true
          autoincrement: true
        _kf_mid:
          type: integer(4)
          fixed: false
          unsigned: false
          primary: false
          notnull: false
          autoincrement: false
        next:
          type: string()
          fixed: false
          unsigned: false
          primary: false
          notnull: false
          autoincrement: false
        number:
          type: string()
          fixed: false
          unsigned: false
          primary: false
          notnull: false
          autoincrement: false
        previous:
          type: string()
          fixed: false
          unsigned: false
          primary: false
          notnull: false
          autoincrement: false
      relations:
        Measure:
          local: _kf_mid
          foreign: _kp_mid
          type: one
        Question:
          local: _kp_pid
          foreign: _kf_pid
          type: many
    Question:
      connection: doctrine
      tableName: question
      columns:
        _kp_qid:
          type: integer(4)
          fixed: false
          unsigned: false
          primary: true
          autoincrement: true
        _kf_pid:
          type: integer(4)
          fixed: false
          unsigned: false
          primary: false
          notnull: false
          autoincrement: false
        text:
          type: string()
          fixed: false
          unsigned: false
          primary: false
          notnull: false
          autoincrement: false
        type:
          type: integer(4)
          fixed: false
          unsigned: false
          primary: false
          notnull: false
          autoincrement: false
      relations:
        Page:
          local: _kf_pid
          foreign: _kp_pid
          type: one

    Read the article

  • memcpy segmentation fault on Linux but not OS X

    - by Andre
    I'm working on implementing a log-based file system for a file as a class project. I have a good amount of it working on my 64-bit OS X laptop, but when I try to run the code on the CS department's 32-bit Linux machines, I get a seg fault. The API we're given allows writing DISK_SECTOR_SIZE (512) bytes at a time. Our log record consists of the 512 bytes the user wants to write plus some metadata (which sector to write to, the type of operation, etc.). All in all, the size of the "record" object is 528 bytes, which means each log record spans 2 sectors on the disk. The first record writes 0-512 on sector 0, and 0-15 on sector 1. The second record writes 16-512 on sector 1, and 0-31 on sector 2. The third record writes 32-512 on sector 2, and 0-47 on sector 3. Etc. So what I do is read the two sectors I'll be modifying into 2 freshly allocated buffers, then copy starting at record into buf1 + the calculated offset, for 512 - offset bytes. This works correctly on both machines. However, the second memcpy fails. Specifically, "record + DISK_SECTOR_SIZE - offset" in the code below segfaults, but only on the Linux machine. Running some random tests, it gets more curious: the Linux machine reports sizeof(Record) to be 528, so memcpy'ing 1 byte from record + 500 into buf shouldn't be a problem. In fact, the biggest offset I can get from record is 254. That is, memcpy(buf1, record + 254, 1) works, but memcpy(buf1, record + 255, 1) segfaults. Does anyone know what I'm missing?

    Record *record = malloc(sizeof(Record));
    record->tid = tid;
    record->opType = OP_WRITE;
    record->opArg = sector;
    int i;
    for (i = 0; i < DISK_SECTOR_SIZE; i++) {
        record->data[i] = buf[i]; // *buf is passed into this function
    }
    char *buf1 = malloc(DISK_SECTOR_SIZE);
    char *buf2 = malloc(DISK_SECTOR_SIZE);
    d_read(ad->disk, ad->curLogSector, buf1);
    d_read(ad->disk, ad->curLogSector + 1, buf2);
    memcpy(buf1 + offset, record, DISK_SECTOR_SIZE - offset);
    memcpy(buf2, record + DISK_SECTOR_SIZE - offset, offset + sizeof(Record) - sizeof(record->data));
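
    One detail is consistent with the numbers reported above and is worth illustrating: record is a Record *, so record + k advances by k * sizeof(Record) bytes, not by k bytes. A minimal sketch, using a hypothetical Record layout (field names from the question, padded to reach the reported 528 bytes):

    #include <stdio.h>
    #include <stdlib.h>

    #define DISK_SECTOR_SIZE 512

    /* Hypothetical layout: the real struct differs, but the question reports
       sizeof(Record) == 528, which this reproduces on typical targets
       (4 x 4-byte ints + 512 bytes of data). */
    typedef struct {
        int tid;
        int opType;
        int opArg;
        int pad;
        char data[DISK_SECTOR_SIZE];
    } Record;

    int main(void)
    {
        Record *record = malloc(sizeof(Record));
        size_t offset = 16; /* example offset within a sector */

        /* Pointer arithmetic on Record * is scaled by sizeof(Record). */
        printf("record + 1 is %zu bytes past record\n",
               (size_t)((char *)(record + 1) - (char *)record));

        /* The question's expression therefore lands roughly
           (512 - offset) * 528 bytes past a 528-byte allocation.
           (Even forming such a pointer is undefined behavior; it is
           computed here only to show the scaling.) */
        printf("record + 512 - offset is %zu bytes past record\n",
               (size_t)((char *)(record + DISK_SECTOR_SIZE - offset)
                        - (char *)record));

        /* A byte-accurate version casts before the arithmetic. */
        const char *src = (const char *)record + DISK_SECTOR_SIZE - offset;
        printf("(char *)record + 512 - offset is %zu bytes past record\n",
               (size_t)(src - (const char *)record));

        free(record);
        return 0;
    }

    If that scaling matches what is happening, the second memcpy's source probably wants to be (const char *)record + DISK_SECTOR_SIZE - offset; the 64-bit OS X build would then have "worked" only because the out-of-range reads happened to hit mapped memory there.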

    Read the article

  • Am I Leaking ADO.NET Connections?

    - by HardCode
    Here is an example of my code in a DAL. All calls to the database's stored procedures are structured this way, and there is no in-line SQL.

    Friend Shared Function Save(ByVal s As MyClass) As Boolean
        Dim cn As SqlClient.SqlConnection = Dal.Connections.MyAppConnection
        Dim cmd As New SqlClient.SqlCommand
        Try
            cmd.Connection = cn
            cmd.CommandType = CommandType.StoredProcedure
            cmd.CommandText = "proc_save_my_class"
            cmd.Parameters.AddWithValue("@param1", s.Foo)
            cmd.Parameters.AddWithValue("@param2", s.Bar)
            Return True
        Finally
            Dal.Utility.CleanupAdoObjects(cmd, cn)
        End Try
    End Function

    Here is the connection factory (if I am using the correct term):

    Friend Shared Function MyAppConnection() As SqlClient.SqlConnection
        Dim cn As New SqlClient.SqlConnection(ConfigurationManager.ConnectionStrings("MyConnectionString").ToString)
        cn.Open()
        If cn.State <> ConnectionState.Open Then
            ' CriticalException is a custom object inheriting from Exception.
            Throw New CriticalException("Could not connect to the database.")
        Else
            Return cn
        End If
    End Function

    Here is the Dal.Utility.CleanupAdoObjects() function:

    Friend Shared Sub CleanupAdoObjects(ByVal cmd As SqlCommand, ByVal cn As SqlConnection)
        If cmd IsNot Nothing Then cmd.Dispose()
        If cn IsNot Nothing AndAlso cn.State <> ConnectionState.Closed Then cn.Close()
    End Sub

    I am getting a lot of "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding." error messages reported by the users. The application's DAL opens a connection, reads or saves data, and closes it. No connections are ever left open - intentionally! There is nothing obvious on the Windows 2000 Server hosting the SQL Server 2000 instance that would indicate a problem: nothing in the event logs and nothing in the SQL Server logs. The timeouts happen randomly - I cannot reproduce them. They happen early in the day with only 1 to 5 users in the system, and also with around 50 users in the system. The highest connection count to SQL Server seen via Performance Monitor, across all databases, has been about 74. The timeouts happen in code that both saves to and reads from the database, in different parts of the application; the stack trace does not point to one or two offending DAL functions. Does my ADO.NET code appear to be able to leak connections? I've googled around a bit, and I've read that this can happen if the connection pool fills up. However, I'm not explicitly setting any connection pooling. I've even tried increasing the Connection Timeout in the connection string, but the timeouts happen long before the 300-second (5-minute) value:

    <add name="MyConnectionString" connectionString="Data Source=MyServer;Initial Catalog=MyDatabase;Integrated Security=SSPI;Connection Timeout=300;"/>

    I'm at a total loss as to what is causing these timeout issues. Any ideas are appreciated.

    Read the article

< Previous Page | 186 187 188 189 190 191 192 193 194 195 196 197  | Next Page >