Search Results

Search found 5262 results on 211 pages for 'operation'.


  • How to find which file is open in the Eclipse editor without using IEditorPart?

    - by Destructor
    I want to know which file (or even just which project) is open in the Eclipse editor. I know how to do this once I have an IEditorPart, from the doSetInput method:

        IFile file = ((IFileEditorInput) iEditorPart.getEditorInput()).getFile();

    but I want the file name without using IEditorPart. Checking which file is selected in the Project Explorer does not help much, because the user can select several files at once and open them all simultaneously, and I have no way to distinguish which file was opened when.

    Adding more info: I have an editor registered for a particular file type, and every time it opens I need to do some work during editor initialization, based on the project properties. While initializing the editor, I need a handle to the file the user opened (double-clicked), or to its project. My editor looks something like this:

        public class MyEditor extends TextEditor {
            @Override
            protected void initializeEditor() {
                setSourceViewerConfiguration(new MySourceViewerConfiguration(
                        CDTUITools.getColorManager(), store, "MyPartitions", this));
            }

            // other required methods

            @Override
            protected void doSetInput(IEditorInput input) throws CoreException {
                if (input instanceof IFileEditorInput) {
                    IFile file = ((IFileEditorInput) input).getFile();
                }
            }
        }

    As in the doSetInput() method, I can get the file handle there (even the project handle would be sufficient). The problem is that in initializeEditor() there is no reference to the editor input, so I cannot get the file handle. In the source viewer configuration I set up the code scanners, and they need project-specific information to set the corresponding rules.
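
    A hedged sketch of one workaround (mine, not from the post): initializeEditor() runs from the TextEditor constructor, before any input exists, so the project-dependent setup can instead be deferred to doSetInput(), which does receive the input. configureScannersFor(...) is a hypothetical hook standing in for the project-specific rule setup.

        import org.eclipse.core.resources.IFile;
        import org.eclipse.core.resources.IProject;
        import org.eclipse.core.runtime.CoreException;
        import org.eclipse.ui.IEditorInput;
        import org.eclipse.ui.IFileEditorInput;
        import org.eclipse.ui.editors.text.TextEditor;

        public class MyEditor extends TextEditor {
            @Override
            protected void doSetInput(IEditorInput input) throws CoreException {
                super.doSetInput(input); // let the framework wire the input first
                if (input instanceof IFileEditorInput) {
                    IFile file = ((IFileEditorInput) input).getFile();
                    IProject project = file.getProject();
                    // initializeEditor() has no input yet, so the project-specific
                    // scanner rules have to be applied here instead.
                    configureScannersFor(project); // hypothetical helper
                }
            }

            private void configureScannersFor(IProject project) {
                // read project properties and re-apply the configuration's rules
                // (assumption: the source viewer configuration exposes such a hook)
            }
        }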

    Read the article

  • Access denied from another thread

    - by Lobuno
    Hello! In my program I spawn a thread (the "working thread"). There I copy some files, write some data to a database and, eventually, delete some other files or directories. Everything works fine. The problem is that I decided to move the deletion to another thread. So the working thread now copies the files or directories and writes to the database and, if some other files need to be deleted, it spawns a second thread, and that second thread deletes the needed files or directories. The problem is that the deletion worked 100% of the time when done in the working thread; now that the same work is done in the secondary thread, I sometimes get an "Access denied" error and the files cannot be deleted. And no, the working thread is definitely NOT accessing the files and directories to be deleted at that moment. Sometimes (but not always) the main thread is impersonating some user, so when needed the deleting thread also runs under impersonation, just to grant it the permissions required to delete the files, so that should not be the problem. Does anybody have a clue why this could be happening?
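
    A hedged mitigation sketch (mine, not from the post), assuming a .NET client: a transient "access denied" on delete is often a handle held briefly by another process (indexer, antivirus, or the copy that just finished), so a short retry loop frequently papers over it. The path handling and retry counts here are illustrative.

        using System;
        using System.IO;
        using System.Threading;

        static class SafeDelete
        {
            public static void DeleteWithRetry(string path, int attempts)
            {
                for (int i = 1; ; i++)
                {
                    try
                    {
                        if (Directory.Exists(path))
                            Directory.Delete(path, true);   // recursive
                        else if (File.Exists(path))
                            File.Delete(path);
                        return;
                    }
                    catch (UnauthorizedAccessException)
                    {
                        if (i >= attempts) throw;           // give up, surface the error
                        Thread.Sleep(200);                  // let the other handle close
                    }
                }
            }
        }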

    Read the article

  • Java - Removing duplicates in an ArrayList

    - by Will
    I'm working on a program that uses an ArrayList to store Strings. The program prompts the user with a menu and allows the user to choose an operation to perform, such as adding Strings to the list, printing the entries, etc. What I want to do is create a method called removeDuplicates(). This method will search the ArrayList and remove any duplicated values, leaving one instance of each duplicated value in the list. I also want the method to return the total number of duplicates removed. I've been trying to use nested loops to accomplish this, but I've been running into trouble because when entries get deleted, the indexing of the ArrayList changes and things don't work as they should. I know conceptually what I need to do, but I'm having trouble implementing the idea in code. Here is some pseudocode:

        start with the first entry;
        check each subsequent entry in the list and see if it matches the first entry;
        remove each subsequent entry in the list that matches the first entry;
        after all entries have been examined, move on to the second entry;
        check each entry in the list and see if it matches the second entry;
        remove each entry in the list that matches the second entry;
        repeat for each entry in the list;

    Here's the code I have so far:

        public int removeDuplicates() {
            int duplicates = 0;
            for (int i = 0; i < strings.size(); i++) {
                for (int j = 0; j < strings.size(); j++) {
                    if (i == j) {
                        // i and j refer to the same entry, so do nothing
                    } else if (strings.get(j).equals(strings.get(i))) {
                        strings.remove(j);
                        duplicates++;
                    }
                }
            }
            return duplicates;
        }
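
    A hedged fix sketch (one possible answer, not from the post): stepping j back after each removal keeps the element that shifts into slot j from being skipped, and starting the inner loop at i + 1 removes the need for the i == j check while ensuring only the later duplicate is ever deleted.

        public int removeDuplicates() {
            int duplicates = 0;
            for (int i = 0; i < strings.size(); i++) {
                for (int j = i + 1; j < strings.size(); j++) {
                    if (strings.get(j).equals(strings.get(i))) {
                        strings.remove(j);
                        duplicates++;
                        j--; // the next element has shifted into index j
                    }
                }
            }
            return duplicates;
        }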

    Read the article

  • 2-way databinding with Entity Framework and WPF DataGrid - is it possible?

    - by Panindra
    I am working on a POS application using SQL CE, WPF and Entity Framework 3.5 SP2, and I am trying to use a DataGrid as my order-entry control, for the user to enter product orders. I am planning to bind it to an Entity Framework model and am looking for two-way updating.

        private void button1_Click(object sender, RoutedEventArgs e)
        {
            using (MasterEntities nwEntities = new MasterEntities())
            {
                var users = from d in nwEntities.Companies
                            select new { d.CompanyId, d.CompanyName, d.Place };
                listBox1.DataContext = users;
                dataGrid1.DataContext = users;
                // foreach (String c in customers)
                // {
                //     MessageBox.Show(c.ToString());
                // }
            }
        }

    When I try to double-click on the DataGrid, it throws an error with the caption "InvalidOperationException was unhandled" and the message:

        A TwoWay or OneWayToSource binding cannot work on the read-only property
        'CompanyId' of type '<>f__AnonymousType0`3[System.Int32,System.String,System.String]'.

    What's wrong here? My XAML looks like this:

        <Grid>
            <ListBox Name="listBox1" ItemsSource="{Binding}" />
            <Button Content="Show" Name="button1" Click="button1_Click" />
            <DataGrid AutoGenerateColumns="False" Name="dataGrid1" ItemsSource="{Binding}">
                <DataGrid.Columns>
                    <DataGridTextColumn Header="ID" Binding="{Binding CompanyId}" />
                    <DataGridTextColumn Header="Company Name" Binding="{Binding CompanyName}" />
                    <DataGridTextColumn Header="Place" Binding="{Binding Place}" />
                </DataGrid.Columns>
            </DataGrid>
        </Grid>

    EDIT: I made the changes shown by @vorrtex, but then I added another button to save the changes, with the following code in its click event, and it is showing an updating error:

        private void button2_Click(object sender, RoutedEventArgs e)
        {
            nwEntities.SaveChanges();
        }
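
    A hedged sketch of one way out (mine, not from the post): anonymous-type properties are read-only, so a TwoWay binding can never write back to them. Binding the grid to the entity instances themselves, and keeping the context alive as a field instead of disposing it in the click handler, lets a later SaveChanges() pick up the grid edits.

        private MasterEntities nwEntities; // field, not a local in a using block

        private void button1_Click(object sender, RoutedEventArgs e)
        {
            nwEntities = new MasterEntities();
            // Mutable, change-tracked entities instead of an anonymous projection.
            dataGrid1.ItemsSource = nwEntities.Companies.ToList();
        }

        private void button2_Click(object sender, RoutedEventArgs e)
        {
            nwEntities.SaveChanges(); // the same context tracked the edits
        }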

    Read the article

  • What is the fastest way to filter a list of strings when making an Intellisense/Autocomplete list?

    - by user559548
    Hello everyone, I'm writing an Intellisense/Autocomplete control like the one you find in Visual Studio. It's all fine up until the list contains roughly 2000+ items. I'm using a simple LINQ statement for the filtering:

        var filterCollection = from s in listCollection
                               where s.FilterValue.IndexOf(currentWord,
                                         StringComparison.OrdinalIgnoreCase) >= 0
                               orderby s.FilterValue
                               select s;

    I then assign this collection to a WPF ListBox's ItemsSource, and that's the end of it; it works fine. Note that the ListBox is virtualised as well, so there will only be at most 7-8 visual elements in memory and in the visual tree. However, the caveat right now is that when the user types extremely fast in the RichTextBox, and on every key-up I execute the filtering plus the binding, there is a semi-race condition, or out-of-sync filtering: the first keystroke's filtering could still be doing its filtering or binding work while the fourth keystroke's is doing the same. I know I could put in a delay before applying the filter, but I'm trying to achieve seamless filtering, much like the one in Visual Studio. I'm not sure where my problem exactly lies, so I'm also attributing it to IndexOf's string operation, or perhaps my list of strings could be optimised with some kind of index that speeds up searching. Any suggestions or code samples are much welcomed. Thanks.
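
    A hedged sketch (mine, not from the post): rather than a fixed delay, stamping each keystroke with a version and discarding any filter result that is stale by the time it completes keeps the UI showing only the latest filter. The background scheduling and control names here are illustrative.

        private int _filterVersion; // bumped on the UI thread for every keystroke

        private void OnKeyUp(string currentWord)
        {
            int version = ++_filterVersion;
            ThreadPool.QueueUserWorkItem(_ =>
            {
                var filtered = listCollection
                    .Where(s => s.FilterValue.IndexOf(currentWord,
                                StringComparison.OrdinalIgnoreCase) >= 0)
                    .OrderBy(s => s.FilterValue)
                    .ToList(); // materialise off the UI thread

                Dispatcher.BeginInvoke((Action)(() =>
                {
                    if (version == _filterVersion)      // drop stale results
                        listBox.ItemsSource = filtered;
                }));
            });
        }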

    Read the article

  • Can someone confirm how Microsoft Excel 2007 internally represents numbers?

    - by Jon
    I know the IEEE 754 floating-point standard by heart, as I had to learn it for an exam. I know exactly how floating-point numbers are used and the problems they can have. I can manually do any operation on the binary representation of a floating-point number. However, I have not found a single source which unambiguously states that Excel uses 64-bit floating-point numbers to internally represent every single cell "type" in Excel except text. I have no idea whether some of the types use signed or unsigned integers and some use 64-bit floating point. I have found countless articles which (1) describe floating-point numbers and then (2) talk about being careful with Excel because of floating-point numbers. I have not found a single statement saying "all types are 64-bit floating-point numbers except text". Nor have I found a single statement which says "changing the type of a cell only changes its visual representation and not its internal representation, unless you change the type from text to some non-text type, or from some non-text type to text". This is literally all I want to know, and it's so simple and axiomatic that I am amazed I can find countless articles and pages which talk around these statements but do not state them directly.
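
    A small aside (mine, not the poster's): the evidence usually cited for the "numbers are IEEE 754 doubles" claim is that Excel documents 15 significant digits of precision, which matches the double type; the classic double artefacts reproduce the behaviour people observe in Excel.

        using System;

        class DoubleDemo
        {
            static void Main()
            {
                double a = 0.1 + 0.2;
                Console.WriteLine(a == 0.3);        // False: the classic double artefact
                Console.WriteLine(a.ToString("R")); // 0.30000000000000004
            }
        }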

    Read the article

  • Named type not used for constructor injection

    - by nmarun
    Hi, I have a simple console application with the following setup:

        public interface ILogger
        {
            void Log(string message);
        }

        class NullLogger : ILogger
        {
            private readonly string version;

            public NullLogger()
            {
                version = "1.0";
            }

            public NullLogger(string v)
            {
                version = v;
            }

            public void Log(string message)
            {
                Console.WriteLine("NULL " + version + " : " + message);
            }
        }

    The configuration details are below:

        <type type="UnityConsole.ILogger, UnityConsole"
              mapTo="UnityConsole.NullLogger, UnityConsole" />

    My calling code looks like this:

        IUnityContainer container = new UnityContainer();
        UnityConfigurationSection section =
            (UnityConfigurationSection)ConfigurationManager.GetSection("unity");
        section.Containers.Default.Configure(container);
        ILogger nullLogger = container.Resolve<ILogger>();
        nullLogger.Log("hello");

    This works fine, but once I give the type a name, like this:

        <type type="UnityConsole.ILogger, UnityConsole"
              mapTo="UnityConsole.NullLogger, UnityConsole"
              name="NullLogger" />

    the calling code above no longer works, even if I explicitly register the type using container.RegisterType<ILogger, NullLogger>(). I get the error:

        Resolution of the dependency failed, type = "UnityConsole.ILogger", name = "".
        Exception message is: The current build operation (build key
        Build Key[UnityConsole.NullLogger, null]) failed: The parameter v could not be
        resolved when attempting to call constructor UnityConsole.NullLogger(System.String v).
        (Strategy type BuildPlanStrategy, index 3)

    Why doesn't Unity look into named instances? To get it to work, I have to do:

        ILogger nullLogger = container.Resolve<ILogger>("NullLogger");

    Where is this behavior documented? Arun
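
    A hedged explanation sketch (mine, not from the post), assuming the Unity 1.2/2.0 API: Unity keys registrations by (type, name), so a named mapping is invisible to an unnamed Resolve<ILogger>(). And when RegisterType<ILogger, NullLogger>() forces a default build, Unity picks the greediest constructor, NullLogger(string v), and cannot resolve v. Pinning the constructor works around that:

        // Default (unnamed) registration that also pins the parameterless ctor:
        container.RegisterType<ILogger, NullLogger>(new InjectionConstructor());

        ILogger byDefault = container.Resolve<ILogger>();           // works now
        ILogger byName = container.Resolve<ILogger>("NullLogger");  // the named one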

    Read the article

  • Is it possible in Perl to require that a subroutine call is made?

    - by MitchelWB
    I don't know enough about Perl to even know exactly what I'm asking for, but I'm writing a series of subroutines to be available to many individual scripts that all process different incoming flat files. The process is far from perfect, but it's what I've got to deal with, and I'm trying to build myself a small library of subs that makes it easier to manage it all. Each script handles a different incoming flat file with its own formatting, sorting, grouping and output requirements. One common aspect is that we have small text files that hold counters, which are used to name the output files so that we have no duplicate file names. Because the processing of the data is different for each file, each script needs to open the counter file to get its counter value. Since this is a common operation, I'd like to put it in a sub that retrieves the counter; the script then runs its own specific code to process the data. And I'd like a second sub that allows me to update the counter file with the new counter once I've processed the data. Is there a way to make a call to the second sub a requirement whenever the first one is called? Ideally it would even be an error that prevents the script from running at all, much like a syntax error.

    EDIT: Here is a little (ugly and simplified) pseudo-code to give a better feel for the current process:

        require "importLibrary.plx";

        # open data source file
        open DataIn, $filename;

        # call getCounterInfo from importLibrary.plx
        # to get the counter value from the counter file
        $counter = &getCounterInfo($counterFileName);

        while (<DataIn>) {
            # process data based on unique formatting and requirements
            # output to task files based on requirements, naming files with $counter
            # increment $counter
        }

        # update counter file with new value of $counter
        &updateCounterInfo($counter);
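
    A hedged sketch of one Perl idiom that enforces the pairing (mine, not from the post): instead of exporting two subs the caller must remember to pair, export a single wrapper that takes the per-script processing as a code ref, so the counter write-back can never be skipped.

        # in importLibrary.plx
        sub with_counter {
            my ($counter_file, $body) = @_;          # $body is a code ref
            my $counter = getCounterInfo($counter_file);
            $counter = $body->($counter);            # run the script-specific work
            updateCounterInfo($counter);             # always runs afterwards
        }

        # in an individual script
        with_counter($counterFileName, sub {
            my ($counter) = @_;
            while (<DataIn>) {
                # process data, name output files with $counter
                $counter++;
            }
            return $counter;
        });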

    Read the article

  • Generic <T>: how to cast?

    - by Kris-I
    Hi, I have a "Product" base class, some other classes "ProductBookDetail","ProductDVDDetail" inherit from this class. I use a ProductService class to make operation on these classes. But, I have to do some check depending of the type (ISBN for Book, languages for DVD). I'd like to know the best way to cast "productDetail" value, I receive in SaveOrupdate. I tried GetType() and cast with (ProductBookDetail)productDetail but that's not work. Thanks, var productDetail = new ProductDetailBook() { .... }; var service = IoC.Resolve<IProductServiceGeneric<ProductDetailBook>>(); service.SaveOrUpdate(productDetail); var productDetail = new ProductDetailDVD() { .... }; var service = IoC.Resolve<IProductServiceGeneric<ProductDetailDVD>>(); service.SaveOrUpdate(productDetail); public class ProductServiceGeneric<T> : IProductServiceGeneric<T> { private readonly ISession _session; private readonly IProductRepoGeneric<T> _repo; public ProductServiceGeneric() { _session = UnitOfWork.CurrentSession; _repo = IoC.Resolve<IProductRepoGeneric<T>>(); } public void SaveOrUpdate(T productDetail) { using (ITransaction tx = _session.BeginTransaction()) { //here i'd like ot know the type and access properties depending of the class _repo.SaveOrUpdate(productDetail); tx.Commit(); } } }

    Read the article

  • Sync two SqlExpress using NHibernate

    - by Christian
    Hello, I am creating a simple project-management system which uses NHibernate for object storage. The underlying database is SQL Server Express (at least currently, for development). The client runs on either the desktop or the laptop. I know I could use web services and store the DB only on the desktop, but this would force the desktop to be available all the time. I am currently thinking about duplicating the DB, having two instances with "different data". To clarify, we are not talking about a production app here; it's a prototype. One very simple way to achieve this would be the following process:

        Client:  Check if the desktop DB is available (through a web service)
        Client:  If yes, use the desktop storage - no problem here
        Client:  If not, use its own DB as storage
        Client:  Poll the desktop regularly; as soon as it comes online, sync
        Client:  Switch to desktop storage
        ...
        Desktop: Do not attempt any DB operation before checking for a required sync
        Desktop: If a sync is needed, do it...

    My question is now: how would you sync? Assume 4 or 5 types of objects, all with GUIDs as identifiers. Would you always manually "lazy load" all objects of a certain type and feed them to the DB? Would you always drop the whole desktop DB in case the client DB may be newer and out of sync? Again, I want to stress that I am not assuming any conflicts or stale data; I basically just want to "copy the whole DB from the client". Would you use NHibernate for this, or would you separate out the copy process? When I think about it, my question comes down to this: is there any NHibernate function like

        SyncDBs_SourceWins_(SourceDB, TargetDB)

    Thanks for help, Chris
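
    A hedged sketch (mine, not from the post): NHibernate has no SyncDBs primitive, but with GUID identifiers a "source wins" copy can be approximated by merging every object through a pair of sessions; SaveOrUpdateCopy is the closest built-in (renamed Merge in later versions). The entity name is illustrative.

        using (ISession source = clientFactory.OpenSession())
        using (ISession target = desktopFactory.OpenSession())
        using (ITransaction tx = target.BeginTransaction())
        {
            // repeat for each of the 4-5 entity types
            foreach (Project p in source.CreateCriteria(typeof(Project)).List<Project>())
            {
                target.SaveOrUpdateCopy(p); // GUID ids make insert-vs-update unambiguous
            }
            tx.Commit();
        }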

    Read the article

  • Performing an SVD on tweets: memory problem

    - by plotti
    I have generated a huge CSV file as output from my POS tagging and stemming. It looks like this:

                   word1  word2  word3  ...  word14400
        person1    1      2      0           1
        person2    0      0      1           0
        ...
        person650

    It contains the word counts for each person; like this, I get a characteristic vector for each person. I want to run an SVD on this beast, but it seems the matrix is too big to be held in memory to perform the operation. My question is: should I reduce the column count by removing words which have a column sum of, for example, 1, meaning they have been used only once? Do I bias the data too much with this attempt? I tried the RapidMiner approach of loading the CSV into a DB and then reading it in sequentially, in batches, for processing, as RapidMiner proposes. But MySQL can't store that many columns in a table, and if I transpose the data and then re-transpose it on import, it also takes ages. So, in general, I am asking for advice on how to perform an SVD on such a corpus.
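
    A hedged sketch (mine, not from the post): a 650 x 14,400 count matrix is actually small once stored sparse, and a truncated SVD never materialises the dense matrix. This assumes SciPy is an option; load_counts() is a stand-in for the CSV parsing.

        import numpy as np
        from scipy.sparse import csr_matrix
        from scipy.sparse.linalg import svds

        counts = csr_matrix(load_counts(), dtype=np.float64)  # 650 x 14400, mostly zeros
        u, s, vt = svds(counts, k=100)  # only the top 100 singular triplets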

    Read the article

  • Create a new webpage using a WCF service

    - by sweeney
    Hello, I'd like to create a WCF operation contract which consumes a string, produces a public web page, and then returns the address of this new page.

        [ServiceContract(Namespace = "")]
        public class DatabaseService
        {
            [OperationContract]
            public string BuildPage(string fileName, string html)
            {
                // writes html to file
                // returns public file url
            }
        }

    This doesn't seem like it should be complicated, but I can't figure out how to do it. So far what I've tried is this:

        [OperationContract]
        public string PrintToFile(string name, string text)
        {
            FileInfo f = new FileInfo(name);
            StreamWriter w = f.AppendText();
            w.Write(text);
            w.Close();
            return f.Directory.ToString();
        }

    Here's the problem: this does not create a file in the web root; it creates it in the directory where the development web server is running. When I run it on an IIS server it seems to do nothing at all (at least not that I can tell). How can I get a handle to the web root programmatically, so that I can place the resulting file there and then return the public URL? I can tack on the domain name after the fact without issue, so returning only the relative path to the file is fine. Thanks, brian
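
    A hedged sketch (mine, not from the post), assuming the service is hosted in IIS with ASP.NET compatibility enabled so the hosting environment is available; the "pages" folder is my choice of name.

        using System.IO;
        using System.Web.Hosting;

        [OperationContract]
        public string BuildPage(string fileName, string html)
        {
            // Resolve the site's physical root at runtime.
            string root = HostingEnvironment.MapPath("~/pages/");
            Directory.CreateDirectory(root);             // ensure the folder exists
            File.WriteAllText(Path.Combine(root, fileName), html);
            return "/pages/" + fileName;                 // relative public URL
        }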

    Read the article

  • Message sent to deallocated instance which has never been released

    - by Jakub
    Hello, I started dealing with NSOperations and (as usual with concurrency) I'm observing strange behaviour. In my class I've got an instance variable:

        NSMutableArray *postResultsArray;

    When one button in the UI is pressed, I initialize the array:

        postResultsArray = [NSMutableArray array];

    and set up the operations (together with their dependencies). In the operations I create a custom object and try to add it to the array:

        PostResult *result = [[PostResult alloc] initWithServiceName:@"Sth" andResult:someResult];
        [self.postResultsArray addObject:result];

    and while adding I get:

        -[CFArray retain]: message sent to deallocated instance 0x3b40c30

    which is strange, as I don't release the array anywhere in my code (I did, but when the problem started to appear I commented out all the release operations to be sure they are not the cause). I also used to have a @synchronized section like below:

        PostResult *result = [[PostResult alloc] initWithServiceName:@"Sth" andResult:someResult];
        @synchronized (self.postResultsArray) {
            [self.postResultsArray addObject:result];
        }

    but the problem was the same (however, the error was for the synchronized operation). Any ideas what I may be doing wrong?
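
    A hedged diagnosis sketch (mine, not a confirmed answer): +array returns an autoreleased object, and assigning it straight to the ivar leaves nothing retaining it once the autorelease pool drains, which matches the crash. Pre-ARC, taking ownership at assignment (or going through a retain property) is the usual fix.

        // direct ivar assignment: take ownership explicitly
        postResultsArray = [[NSMutableArray alloc] init];

        // or, with @property (retain) NSMutableArray *postResultsArray; declared:
        self.postResultsArray = [NSMutableArray array];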

    Read the article

  • Java Function Analysis

    - by khan
    Okay... I am a total Python guy and have very rarely worked with Java and its methods. The situation is that I have got a Java function that I have to explain to my instructor, and I have got no clue about how to do so. If one of you can read this properly, kindly help me out by breaking it down and explaining it. Also, I need to find out any flaw in its operation (i.e. usage of loops, etc.) if there is any. Finally, what is the difference between the 'String' and 'String[]' types?

        public static void search(String findfrom, String[] thething) {
            if (thething.length > 5) {
                System.err.println("The thing is quite long");
            } else {
                int[] rescount = new int[thething.length];
                for (int i = 0; i < thething.length; i++) {
                    String[] characs = findfrom.split("[ \"\'\t\n\b\f\r]", 0);
                    for (int j = 0; j < characs.length; j++) {
                        if (characs[j].compareTo(thething[i]) == 0) {
                            rescount[i]++;
                        }
                    }
                }
                for (int j = 0; j < thething.length; j++) {
                    System.out.println(thething[j] + ": " + rescount[j]);
                }
            }
        }
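
    A hedged explanation sketch (mine): the method counts, for each word in thething (a String is a single string; a String[] is an array of them), how many whitespace- or quote-delimited tokens of findfrom equal it, then prints one count per word. Two flaws worth naming: the split is recomputed inside the outer loop although findfrom never changes, and regex split leaves empty tokens between adjacent delimiters, as this demo shows:

        public class SplitDemo {
            public static void main(String[] args) {
                String[] tokens = "say \"hello\" world".split("[ \"\'\t\n\b\f\r]", 0);
                // tokens -> ["say", "", "hello", "", "world"]; the empty strings
                // come from adjacent delimiters (a space followed by a quote)
                for (String t : tokens) System.out.println("[" + t + "]");
            }
        }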

    Read the article

  • Diff/Merge functionality for objects (not files!)

    - by gehho
    I have a collection of objects of the same type, let's call it DataItem. The user can view and edit these items in an editor. It should also be possible to compare and merge different items, i.e. some sort of diff/merge for DataItem instances. The DIFF functionality should compare all (relevant) properties/fields of the items and detect differences. The MERGE functionality should then be able to merge two instances by applying selected differences to one of the items. For example (pseudo objects):

        DataItem1 {                  DataItem2 {
            Prop1 = 10                   Prop1 = 10
            Prop2 = 25                   Prop2 = 13
            Prop3 = 0                    Prop3 = 5
            Coll = { 7, 4, 8 }           Coll = { 7, 4, 8, 12 }
        }                            }

    Now the user should be presented with a list of differences (i.e. Prop2, Prop3, and Coll) and should be able to select which differences he wants to eliminate by assigning the value from one item to the other. He should also be able to choose whether to assign the value from DataItem1 to DataItem2 or vice versa. Are there common practices which should be used to implement this functionality? Since the same editor should also provide undo/redo functionality (using the Command pattern), I was thinking about reusing the ICommand implementations, because both scenarios basically deal with property assignments, collection changes, and so on. My idea was to create Difference objects with ICommand properties which can be used to perform a merge operation for a specific Difference. Btw: the programming language will be C# with .NET 3.5 SP1/4.0. However, I think this is more of a language-independent question. Any design pattern/idea/whatsoever is welcome!
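
    A hedged sketch of the Difference-with-commands idea (mine, not from the post); all names are illustrative, and ICommand here is a stand-in for whatever command abstraction the editor's undo/redo already uses.

        using System.Collections.Generic;
        using System.Reflection;

        // Stand-in for the editor's existing command abstraction (assumption).
        public interface ICommand
        {
            void Execute();
            void Undo();
        }

        public class Difference
        {
            public string PropertyName;
            public object LeftValue;
            public object RightValue;
            public ICommand ApplyLeftToRight;   // the poster's idea: one command
            public ICommand ApplyRightToLeft;   // per merge direction
        }

        public static class DiffEngine
        {
            public static IEnumerable<Difference> Diff(DataItem left, DataItem right)
            {
                foreach (PropertyInfo p in typeof(DataItem).GetProperties())
                {
                    object l = p.GetValue(left, null);
                    object r = p.GetValue(right, null);
                    // Caveat: collections compare by reference here; a real
                    // implementation needs element-wise comparison for Coll.
                    if (!object.Equals(l, r))
                        yield return new Difference
                        {
                            PropertyName = p.Name,
                            LeftValue = l,
                            RightValue = r
                        };
                }
            }
        }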

    Read the article

  • Simplest way to mix sequences of types with iostreams?

    - by Kylotan
    I have a function template<typename T> void write(const T&), which is implemented in terms of writing the T object to an ostream, and a matching function template<typename T> T read() that reads a T from an istream. I am basically using iostreams as a plain-text serialisation format, which obviously works fine for most built-in types, although I'm not sure how to effectively handle std::strings just yet. I'd like to be able to write out a sequence of objects too, e.g. template<typename T> void write(const std::vector<T>&), or an iterator-based equivalent (although in practice it would always be used with a vector). However, while writing an overload that iterates over the elements and writes them out is easy enough, this doesn't add enough information to allow the matching read operation to know how each element is delimited, which is essentially the same problem that I have with a single std::string. Is there a single approach that can work for all basic types and std::string? Or perhaps I can get away with two overloads: one for numeric types and one for strings? (Either using different delimiters, or having strings use a delimiter-escaping mechanism, perhaps.)
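
    A hedged sketch of one uniform approach (mine, not from the post): length-prefix every string and count-prefix every sequence, so the reader always knows exactly how much to consume and no delimiter escaping is needed.

        #include <iostream>
        #include <string>
        #include <vector>

        template <typename T>
        void write(std::ostream& os, const T& v) { os << v << ' '; }

        // strings become "5 hello " - the length says how many chars follow
        void write(std::ostream& os, const std::string& s) {
            os << s.size() << ' ' << s << ' ';
        }

        template <typename T>
        void write(std::ostream& os, const std::vector<T>& xs) {
            write(os, xs.size());              // element count first
            for (std::size_t i = 0; i < xs.size(); ++i)
                write(os, xs[i]);
        }

        std::string read_string(std::istream& is) {
            std::string::size_type n;
            is >> n;
            is.get();                          // skip the single separator space
            std::string s(n, '\0');
            is.read(&s[0], static_cast<std::streamsize>(n));
            return s;
        }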

    Read the article

  • How does MySQL implement the "group by"?

    - by user188916
    I read the MySQL Reference Manual and found that when GROUP BY can make use of an index, it just does an index scan; otherwise it creates temporary tables and does things like filesort. I also read in another article that the GROUP BY result is sorted by the grouped columns by default, and that if an ORDER BY NULL clause is added, it won't do the filesort. The difference can be seen in the EXPLAIN output. So my question is: what is the difference between a GROUP BY clause with ORDER BY NULL and one without? I tried to use profiling to see what MySQL does in the background, and only got results like this:

        Result for a GROUP BY clause without ORDER BY NULL:

            preparing             0.000016
            Creating tmp table    0.000048
            executing             0.000009
            Copying to tmp table  0.000109
            Sorting result        0.000023   <--
            Sending data          0.000027

        Result for the same clause with ORDER BY NULL:

            preparing             0.000016
            Creating tmp table    0.000052
            executing             0.000009
            Copying to tmp table  0.000114
            Sending data          0.000028

    So I guess that when ORDER BY NULL is added, MySQL does not use the filesort algorithm; maybe when it creates the tmp table it uses an index as well, then uses the index to do the GROUP BY operation, and when that completes it just reads the results from the table rows without sorting them. But my original understanding was that MySQL could quicksort the items and then do the GROUP BY, in which case the result would come out sorted as well. Any opinions appreciated. Thanks.
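
    A hedged illustration (mine, not from the post): the effect shows up directly in EXPLAIN's Extra column. Table and column names here are made up, and the behaviour described is that of MySQL 5.x, where GROUP BY implies a sort unless ORDER BY NULL suppresses it.

        EXPLAIN SELECT category, COUNT(*) FROM items GROUP BY category;
        -- Extra: Using temporary; Using filesort

        EXPLAIN SELECT category, COUNT(*) FROM items GROUP BY category ORDER BY NULL;
        -- Extra: Using temporary            (the filesort step is gone)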

    Read the article

  • IIS doesn't send two responses to the same client at the same time (only for ASP)

    - by dr. evil
    I've got 2 ASP pages. I make a request to the first page from Firefox (which takes 30 seconds to process server-side), and during those 30 seconds I make another request from Firefox to the second page (which takes less than 1 second server-side) - but the response comes after 31 seconds, because it waits for the first request to finish. When I request the first page from Firefox and then request the second page from IE, the second response is instant. So basically ASP on IIS 6 is somehow limiting every client to one long-running request at a time. I need to get around this problem in my .NET client application. This has been tested on 3 different systems. If you want to test it, try the ASP scripts at the end. The behaviour is the same for a long SQL execution or a purely time-consuming ASP operation. Notes:

        - It's not about HTTP keep-alive.
        - It's not about the persistent connection limit (we tried increasing this in
          Firefox and in .NET with Net.ServicePointManager.DefaultConnectionLimit).
        - It's not about the User-Agent.
        - This doesn't happen in ASP.NET, so I assume it's something to do with ASP.dll.

    I'm trying to solve this on the client, not the server. I don't have direct control over the server; it's a third-party solution. Is there any way to get around this? Sample ASP code:

    First ASP page:

        <%
        Set cnn = Server.CreateObject("Adodb.Connection")
        cnn.Open "Provider=sqloledb;Data Source=.;Initial Catalog=master;User Id=sa;Password=;"
        cnn.Execute("WAITFOR DELAY '0:0:30'")
        cnn.Close
        %>

    Second ASP page:

        <%
        Response.Write "bla bla"
        %>
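
    A hedged client-side sketch (mine, not from the post): classic ASP serializes requests that share a session, so if the queueing here is session-based, refusing to share the ASPSESSIONID cookie between the two requests should let them run in parallel. Whether sessions are the culprit in this setup is an assumption; the URL is illustrative.

        using System.Net;

        HttpWebRequest req = (HttpWebRequest)WebRequest.Create("http://server/second.asp");
        req.CookieContainer = new CookieContainer(); // fresh container: no shared session
        using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
        {
            // read the response...
        }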

    Read the article

  • NHibernate mapping: delete collection, insert new collection with old IDs

    - by npeBeg
    My issue looks similar to this one (link), but I have a one-to-many association:

        <set name="Fields" cascade="all-delete-orphan" lazy="false" inverse="true">
            <key column="[TEMPLATE_ID]" />
            <one-to-many class="MyNamespace.Field, MyLibrary" />
        </set>

    (I also tried to use a <bag>.) This mapping is for the Template object. Both it and the Field object have their ID generators set to identity. When I call session.Update for the Template object, it works fine - well, almost: if a Field object has a non-zero Id, an UPDATE SQL statement is issued; if the Id is 0, an INSERT is performed. But if I delete a Field object from the collection, it has no effect on the database. I found that if I also call session.Delete for this Field object, everything works, but due to the client-server architecture I don't know what to delete. So I decided to delete all the collection's elements from the DB and call session.Update with the new collection, and I ran into an issue: NHibernate performs the UPDATE operation for the Field objects that have non-zero Ids, but they have already been removed from the DB! Maybe I should use some other Id generator or something else. What is the best way to make NHibernate perform this "delete all / insert all" routine for the collection?
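
    A hedged sketch (mine, not a confirmed answer): with inverse="true" and all-delete-orphan, NHibernate issues the DELETEs itself if the loaded persistent collection is mutated rather than replaced, so one approach is to merge the incoming detached Template into the loaded one.

        using (ITransaction tx = session.BeginTransaction())
        {
            Template persistent = session.Get<Template>(incoming.Id);
            persistent.Fields.Clear();            // orphaned rows -> DELETE
            foreach (Field f in incoming.Fields)
            {
                f.Template = persistent;          // keep the inverse side in sync
                persistent.Fields.Add(f);         // re-added rows -> INSERT/UPDATE
            }
            tx.Commit();
        }

    One caveat: with identity generators, re-added fields that still carry old database ids may need those ids reset to 0 first, since a non-zero id is treated as an existing row.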

    Read the article

  • Nose2 multiprocess error on Windows 7

    - by tt293
    I was looking into nose2 as a way to get around the restriction in nose 1.3 of not being able to have both xunit output and multiprocessing. However, when always-on is set to False in the [multiprocess] section, I can only get a single process running, while with always-on set to True, I get the following error:

        ----------------------------------------------------------------------
        Ran 0 tests in 0.043s

        OK
        Traceback (most recent call last):
          File "C:\dev\testing\Tests\PythonTests\venv\Scripts\nose2-script.py", line 8, in <module>
            load_entry_point('nose2==0.4.7', 'console_scripts', 'nose2')()
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\main.py", line 284, in discover
            return main(*args, **kwargs)
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\main.py", line 98, in __init__
            super(PluggableTestProgram, self).__init__(**kw)
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\unittest2-0.5.1-py2.7.egg\unittest2\main.py", line 98, in __init__
            self.runTests()
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\main.py", line 260, in runTests
            self.result = runner.run(self.test)
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\runner.py", line 53, in run
            executor(test, result)
          File "C:\dev\testing\Tests\PythonTests\venv\lib\site-packages\nose2-0.4.7-py2.7.egg\nose2\plugins\mp.py", line 60, in _runmp
            ready, _, _ = select.select(rdrs, [], [], self.testRunTimeout)
        select.error: (10038, 'An operation was attempted on something that is not a socket')

    This is running 32-bit Python 2.7.5 on Windows 7 in a virtualenv with six 1.1.0, unittest2 0.5.1 and nose2 0.4.7 (I get the same behaviour outside the venv, so I don't think that is the issue here).
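
    For reference, a hedged reconstruction (mine) of the nose2.cfg in question; the traceback itself points at the cause, since on Windows select.select() accepts only sockets, and the mp plugin here is handing it pipe handles.

        [unittest]
        plugins = nose2.plugins.mp

        [multiprocess]
        always-on = True
        processes = 4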

    Read the article

  • Help! Getting an error copying data from one column to the same column in a similar recordset

    - by Mike D
    I have a routine which reads one recordset and adds/updates rows in a similar recordset. The routine starts by copying the columns to a new recordset. Here's the code for creating the new recordset:

        For X = 1 To aRS.Fields.Count
            mRS.Fields.Append aRS.Fields(X - 1).Name, aRS.Fields(X - 1).Type, _
                aRS.Fields(X - 1).DefinedSize, aRS.Fields(X - 1).Attributes
        Next X

    Pretty straightforward - notice the copying of the Name, Type, DefinedSize and Attributes. Further down in the code (and nothing modifies any of the columns in between), I'm copying the values of a row to a row in the new recordset, like this:

        For C = 1 To aRS.Fields.Count
            mRS.Fields(C - 1) = aRS.Fields(C - 1)
        Next C

    When it gets to the last column, which is numeric, it fails with the "Multiple-step operation generated an error" message. I know MS says this error is generated by the provider, which in this case is ADO 2.8. There is no open connection to the DB at this point, either. I'm pulling out what little hair I have left over this one. (And I don't really care at this point that the column index is X in one loop and C in the other; I'll change it later, once the real problem is fixed.)
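
    A hedged sketch of one known culprit (mine, not a confirmed answer): for adNumeric/adDecimal columns, Precision and NumericScale are not covered by Name/Type/DefinedSize/Attributes, and appending without them can make the later value assignment overflow with exactly this "multiple-step" error. The extra properties must be set before the new recordset is opened.

        For X = 0 To aRS.Fields.Count - 1
            mRS.Fields.Append aRS.Fields(X).Name, aRS.Fields(X).Type, _
                aRS.Fields(X).DefinedSize, aRS.Fields(X).Attributes
            Select Case aRS.Fields(X).Type
                Case adNumeric, adDecimal
                    mRS.Fields(X).Precision = aRS.Fields(X).Precision
                    mRS.Fields(X).NumericScale = aRS.Fields(X).NumericScale
            End Select
        Next X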

    Read the article

  • Keeping params in the URL when sorting [Rails 3]

    - by kamil
    I have sortable table columns, made like this: http://asciicasts.com/episodes/228-sortable-table-columns. And I have simple filter options for two columns in the table, made with select_tag (GET method). These two features don't work together: when I change the filter, the sort parameter disappears, and vice versa.

        <th><%= sortable "Id" %></th>
        <th>
          Status<br/>
          <form method="get">
            <%= select_tag(:status, options_for_select([['All', 'all']] + @statuses, params[:status]),
                           { :onchange => 'this.form.submit()' }) %>
        </th>
        <th><%= sortable "Operation" %></th>
        <th>
          Processor<br/>
          <%= select_tag(:processor, options_for_select([['All', 'all']] + @processor_names, params[:processor]),
                         { :onchange => 'this.form.submit()' }) %>
          </form>
        </th>
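
    A hedged sketch (mine, not from the post): a GET form submits only its own fields, dropping everything else from the query string, so carrying the current sort parameters through as hidden fields is one way to preserve them. That the asciicast's sortable helper uses sort and direction params is assumed here.

        <form method="get">
          <%= hidden_field_tag :sort, params[:sort] %>
          <%= hidden_field_tag :direction, params[:direction] %>
          <%= select_tag(:status, options_for_select([['All', 'all']] + @statuses, params[:status]),
                         { :onchange => 'this.form.submit()' }) %>
        </form>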

    Read the article

  • Is memory allocation in Linux non-blocking?

    - by Mark
    I am curious to know whether allocating memory using the default new operator is a non-blocking operation. For example:

        struct Node {
            int a, b;
        };

        ...

        Node* foo = new Node();

    If multiple threads tried to create a new Node, and one of them was suspended by the OS in the middle of the allocation, would it block the other threads from making progress? The reason I ask is that I had a concurrent data structure that created new nodes. I then modified the algorithm to recycle the nodes. The throughput of the two algorithms was virtually identical on a 24-core machine. However, I then created an interference program that ran on all the system cores in order to create as much OS pre-emption as possible. The throughput of the algorithm that created new nodes decreased by a factor of 5 relative to the algorithm that recycled nodes. I'm curious to know why this would occur. Thanks. Edit: pointing me to the code for the C++ memory allocator for Linux would be helpful as well. I tried looking before posting this question, but had trouble finding it.
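
    A hedged aside (mine, not from the post): glibc's allocator (ptmalloc2) guards its arenas with mutexes, so a thread pre-empted while holding an arena lock can stall other allocating threads - one plausible reading of the 5x drop. A per-thread free list of the kind the poster describes dodges that lock on the hot path; this sketch uses C++11 thread_local.

        struct Node {
            int a, b;
            Node* next; // intrusive free-list link
        };

        thread_local Node* freeList = nullptr;

        Node* acquireNode() {
            Node* n = freeList;
            if (n) {                      // reuse: no allocator, no lock
                freeList = n->next;
                return n;
            }
            return new Node();            // slow path: may contend on the heap lock
        }

        void releaseNode(Node* n) {
            n->next = freeList;           // push onto this thread's list
            freeList = n;
        }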

    Read the article

  • How to decide between using PLINQ and LINQ at runtime?

    - by Hamish Grubijan
    Or how to decide between a parallel and a sequential operation in general. It is hard to know without testing whether the parallel or the sequential implementation is best, due to overhead. Obviously it will take some time to train "the decider" on which method to use. I would say that this method cannot be perfect, so it is probabilistic in nature. The inputs x, y, z do influence the decider. I think a very naive implementation would be to give both implementations a 1/2 chance at the beginning and then start favoring them according to past performance - but this disregards x, y, z. I suspect this question would be better answered by academics than practitioners. Anyhow, please share your heuristics, your experience if any, and your tips on this. Sample code:

        public interface IComputer
        {
            decimal Compute(decimal x, decimal y, decimal z);
        }

        public class SequentialComputer : IComputer
        {
            public decimal Compute( ... // sequential implementation
        }

        public class ParallelComputer : IComputer
        {
            public decimal Compute( ... // parallel implementation
        }

        public class HybridComputer : IComputer
        {
            private SequentialComputer sc;
            private ParallelComputer pc;
            private TheDecider td; // helps to decide between the two

            public HybridComputer()
            {
                sc = new SequentialComputer();
                pc = new ParallelComputer();
                td = new TheDecider();
            }

            public decimal Compute(decimal x, decimal y, decimal z)
            {
                decimal result;
                decimal time;
                if (td.PickOneOfTwo() == 0)
                {
                    // Time this and save the result into time.
                    result = sc.Compute(x, y, z);
                }
                else
                {
                    // Time this and save the result into time.
                    result = pc.Compute(x, y, z);
                }
                td.Train(time);
                return result;
            }
        }
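
    A hedged sketch of TheDecider as a tiny epsilon-greedy bandit (mine, not from the post): it still ignores x, y, z, as the poster suspected a naive version would, and it remembers the last pick so Train(time) can keep the poster's signature.

        public class TheDecider
        {
            private readonly decimal[] avg = new decimal[2]; // running mean time per option
            private readonly int[] count = new int[2];
            private readonly Random rng = new Random();
            private int lastPick;

            public int PickOneOfTwo()
            {
                // explore 10% of the time (and until both options have a sample)
                if (count[0] == 0 || count[1] == 0 || rng.NextDouble() < 0.1)
                    lastPick = rng.Next(2);
                else
                    lastPick = avg[0] <= avg[1] ? 0 : 1;     // exploit the faster one
                return lastPick;
            }

            public void Train(decimal time)
            {
                count[lastPick]++;
                avg[lastPick] += (time - avg[lastPick]) / count[lastPick];
            }
        }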

    Read the article

  • Tree deletion with NHibernate

    - by Tigraine
    Hi, I'm struggling with a little problem and starting to arrive at the conclusion that it's simply not possible. I have a table called Group. As with most of these systems, Group has a ParentGroup and a Children collection, so the Group table looks like this:

        Group
          - ID (PK)
          - Name
          - ParentId (FK)

    I did my mappings using FNH AutoMappings, but I had to override the defaults for this:

        p.References(x => x.Parent)
            .Column("ParentId")
            .Cascade.All();

        p.HasMany(x => x.Children)
            .KeyColumn("ParentId")
            .ForeignKeyCascadeOnDelete()
            .Cascade.AllDeleteOrphan()
            .Inverse();

    Now, the general idea was to be able to delete a node and have all of its children deleted by NHibernate too; deleting the only root node should basically clear the whole table. I first tried Cascade.AllDeleteOrphan, but that only works for deletion of items from the Children collection, not deletion of the parent. Next I tried ForeignKeyCascadeOnDelete, so the operation gets delegated to the database through ON DELETE CASCADE. But once I do that, SQL Server 2008 does not allow me to create the constraint, failing with:

        Introducing FOREIGN KEY constraint 'FKBA21C18E87B9D9F7' on table 'Group' may cause
        cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION,
        or modify other FOREIGN KEY constraints.

    Well, and that's it for me. I guess I'll just loop through the children and delete them one by one, thus doing N+1 deletes. If anyone has a suggestion on how to do this more elegantly, I'd be eager to hear it.
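
    A hedged sketch of the fallback (mine, not from the post): SQL Server refuses ON DELETE CASCADE on self-referencing keys, so a depth-first recursive delete inside one transaction is the usual compromise - still N+1 statements, but no child ever outlives its parent.

        void DeleteGroupTree(ISession session, Group group)
        {
            // copy the collection: deleting mutates it underneath us
            foreach (Group child in new List<Group>(group.Children))
                DeleteGroupTree(session, child);
            session.Delete(group);
        }

        using (ITransaction tx = session.BeginTransaction())
        {
            DeleteGroupTree(session, rootGroup);
            tx.Commit();
        }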

    Read the article
