Search Results

Search found 59959 results on 2399 pages for 'time complexity'.


  • Creating Lotus Notes documents with specific created/modified/last accessed dates for testing

    - by Xolstice
    I'm currently writing an application that moves Notes documents between databases based on the number of days that have elapsed since their creation/modified/last-accessed dates. I would like ideas on a simple and convenient way to create documents with specific dates, without having to change the time on the Domino server, so that I can test my application. The best way I have found so far is to create a local replica and change the system clock to the date I want. Unfortunately there are problems with this method: it does not work for the modified date - I'm not sure where the modified date information comes from when the location is set to Island (Disconnected) - and it also changes the modified and last-accessed dates when the documents are replicated to the server replica. Someone suggested creating a DXL export of the document, modifying the date-time in the DXL file, and then importing it back into the database as a Notes document, but that does not work either: the document just takes on the date-time at which it was created. Can anyone offer any other suggestions?


  • Constructing human readable sentences based on a survey

    - by Joshua
    The following survey is given to course attendees to assess an instructor at the end of the course.

    Communication Skills

    1. The instructor communicated course material clearly and accurately. (Yes/No)
    2. The instructor explained course objectives and learning outcomes. (Yes/No)
    3. In the event of not understanding course materials, the instructor was available outside of class. (Yes/No)
    4. Was instructor feedback and the grading process clear and helpful? (Yes/No)
    5. Do you feel that your oral and written skills have improved during this course? (Yes/No)

    We would like to summarize each attendee's selections based on the choices they made. If the provided answers were [No, No, Yes, Yes, Yes], we would summarize this as: "The instructor did not communicate course material clearly and did not explain course objectives and learning outcomes, but was available and helpful outside of class. The instructor's feedback and grading process was clear and helpful, and I feel that my oral and written skills have improved because of this course." Based on the selections chosen by the attendee, the summary would be quite different, which leads to many possible summaries depending on the choices selected and the number of such questions in the survey. The questions are usually provided by the training organization. How do you come up with a generic solution so that this can be effectively translated into a human-readable form? I am looking for tools or libraries (Java-based), or suggestions, which will help me create such human-readable output. I would like to hide the complexity from the end users as much as possible.
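    One minimal sketch of a template-driven approach: pair each yes/no question with a "yes" clause and a "no" clause, then stitch the chosen clauses into a sentence. The Question record and all clause text below are hypothetical illustrations, not part of any existing library.

    ```java
    import java.util.List;

    public class SurveySummarizer {
        // One clause per possible answer; configured per question by the organization.
        record Question(String yesClause, String noClause) {}

        // Joins the clause chosen for each answer into a single summary sentence.
        static String summarize(List<Question> questions, List<Boolean> answers) {
            StringBuilder sb = new StringBuilder("The instructor ");
            for (int i = 0; i < questions.size(); i++) {
                sb.append(answers.get(i) ? questions.get(i).yesClause()
                                         : questions.get(i).noClause());
                if (i < questions.size() - 2) sb.append(", ");
                else if (i == questions.size() - 2) sb.append(", and ");
            }
            return sb.append('.').toString();
        }

        public static void main(String[] args) {
            List<Question> survey = List.of(
                new Question("communicated course material clearly",
                             "did not communicate course material clearly"),
                new Question("explained the course objectives",
                             "did not explain the course objectives"),
                new Question("was available outside of class",
                             "was not available outside of class"));
            // [No, No, Yes] -> one flowing sentence.
            System.out.println(summarize(survey, List.of(false, false, true)));
        }
    }
    ```

    For grammatically richer output (verb agreement, aggregation of related clauses), a Java natural-language-generation library such as SimpleNLG may be worth evaluating.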


  • Incremental Timer

    - by Donal Rafferty
    I'm currently using a Timer and TimerTask to perform some work every 30 seconds. My problem is that after each run I want to increment the interval of the Timer. For example, it starts off with 30 seconds between firings, but I want to add 10 seconds to the interval each time, so that the next run happens 40 seconds later. Here is my current code:

    ```java
    public void StartScanning() {
        scanTask = new TimerTask() {
            public void run() {
                handler.post(new Runnable() {
                    public void run() {
                        wifiManager.startScan();
                        scanCount++;
                        if (SCAN_INTERVAL_TIME <= SCAN_MAX_INTERVAL) {
                            SCAN_INTERVAL_TIME = SCAN_INTERVAL_TIME + SCAN_INCREASE_INTERVAL;
                            t.schedule(scanTask, 0, SCAN_INTERVAL_TIME);
                        }
                    }
                });
            }
        };

        Log.d("SCAN_INTERVAL_TIME ** ", "SCAN_INTERVAL_TIME ** = " + SCAN_INTERVAL_TIME);
        t.schedule(scanTask, 0, SCAN_INTERVAL_TIME);
    }
    ```

    But the above gives the following error:

    ```
    05-26 11:48:02.472: ERROR/AndroidRuntime(4210): java.lang.IllegalStateException: TimerTask is scheduled already
    ```

    Calling cancel or purge doesn't help. So I was wondering if anyone can help me find a solution? Is a Timer even the right way to approach this?
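    A TimerTask instance can be scheduled only once, which is what the IllegalStateException is complaining about: the code above reschedules the same scanTask object from inside its own run(). One way around this, sketched here as a plain-Java standalone class (the field names and interval values are illustrative), is to schedule a fresh one-shot task on each run and grow the delay before rescheduling:

    ```java
    import java.util.Timer;
    import java.util.TimerTask;

    public class IncrementalScanner {
        private final Timer timer = new Timer();
        private long interval = 30_000;                  // start at 30 s
        private static final long INCREMENT = 10_000;    // grow by 10 s per run
        private static final long MAX_INTERVAL = 120_000;

        public void start() {
            scheduleNext();
        }

        private void scheduleNext() {
            // A brand-new TimerTask each time avoids "TimerTask is scheduled already".
            timer.schedule(new TimerTask() {
                @Override
                public void run() {
                    doWork();
                    if (interval < MAX_INTERVAL) {
                        interval += INCREMENT;
                    }
                    scheduleNext();   // one-shot reschedule with the new, longer delay
                }
            }, interval);
        }

        private void doWork() {
            System.out.println("scanning; next run in " + interval + " ms");
        }
    }
    ```

    On Android specifically, the same self-rescheduling pattern is often written with Handler.postDelayed, which avoids the extra Timer thread altogether.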


  • Designing bayesian networks

    - by devoured elysium
    I have a basic question about Bayesian networks. Let's assume we have an engine that stops working with probability 1/3. I'll call this variable ENGINE. If it stops working, then your car doesn't work. If the engine is working, then your car will work 99% of the time. I'll call this one CAR. Now, if your car is old (OLD), instead of failing 1/3 of the time, your engine will stop working 1/2 of the time. I'm being asked to first design the network and then assign all the associated conditional probabilities. I'd say the diagram of this network would be something like:

    OLD -> ENGINE -> CAR

    For the conditional probability tables I did the following (the entries give the probability that the engine stops working given OLD, and that the car works given ENGINE, respectively):

    ```
    OLD   | ENGINE
    ------+-------
    True  | 0.50
    False | 0.33
    ```

    and

    ```
    ENGINE | CAR
    -------+-----
    True   | 0.99
    False  | 0.00
    ```

    Now I am having trouble with how to define the probabilities of OLD. In my point of view, OLD is not something that has a CAUSE relationship with ENGINE; I'd say it is more a characteristic of it. Maybe there is a different way to express this in the diagram? If the diagram is indeed correct, how would I go about building the tables? Thanks
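    As far as the tables go, a root node like OLD needs only an unconditional prior, say P(OLD) = p (the value of p is not given in the problem statement; it is a free assumption here). Everything downstream then follows by marginalization, for example:

    ```latex
    P(\text{ENGINE works}) = p \cdot 0.5 + (1 - p) \cdot \tfrac{2}{3},
    \qquad
    P(\text{CAR works}) = 0.99 \cdot P(\text{ENGINE works})
    ```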


  • How to target multiple versions of .NET Framework from MSBuild?

    - by McKAMEY
    I am improving the builds for an open source project which currently supports .NET Framework v2.0, v3.5, and now v4.0. Up until now, I've restricted myself to v2.0 to ensure compatibility, but with VS2010 I am interested in having real targeted builds. I'm looking for some guidance on how to edit the MSBuild csproj/sln to be able to cleanly produce builds for each target. I'm willing to have complexity in the csproj and in a batch file to control the build. My goal is to be able to have a command line script that could produce the builds without needing Visual Studio installed, but only the necessary .NET Framework(s). Ideally, I'd like to minimize dependencies on additional software. I notice that a lot of people use NAnt (e.g. Ninject builds many targets with NAnt) but I'm unsure if this is necessary or if they are just more familiar with it. I'm pretty sure this can be done but am having trouble finding a definitive guide on setting it up and best practices. Bonus: my next step after getting this set up will be to better support Mono Framework. Any help on doing this same thing for Mono would be much appreciated.


  • How to protect copyright on large web applications?

    - by Saif Bechan
    Recently I read about Myows, described as "the universal copyright management and protection app for smart creatives". It is used to protect copyright and more. Currently I am working on a large web application which is in a late testing phase. Because of the complexity of the app there are not many versions of it online, so copyright will be a huge issue for me, as much of the code is in JavaScript and is easily copied. I was glad to see that there is a company out there that provides such a service, and naturally I wanted to know if there were people using it. I did not know that this type of concept was so new. Is protecting copyright a good idea for a large web application? If so, do you think Myows will be worth using, or are there better ways to achieve that? Update: Wow, there is no better person to have answered this question than the creator himself. There were some nice points made in the answer, and I think it will be a good service for people like me. In the next couple of weeks I will be looking further into the subject and start uploading my code to see how it works out. I will leave this question up because I do want some more suggestions on this topic.


  • postgresql table for storing automation test results

    - by Martin
    I am building an automation test suite which runs on multiple machines, all reporting their status to a PostgreSQL database. We will run a number of automated tests, for which we will store the following information:

    • test ID (a GUID)
    • test name
    • test description
    • status (running, done, waiting to be run)
    • progress (%)
    • start time of test
    • end time of test
    • test result
    • latest screenshot of the running test (updated every 30 seconds)

    The number of tests isn't huge (say a few thousand), and each machine (say, 50 of them) has a service which checks the database and figures out if it's time to start a new automated test on that machine. How should I organize my SQL table to store all the information? Is a single table with a column per attribute the way to go? If in the future I need to add attributes but want to keep compatibility with the old database format (i.e. I may not want to delete and recreate the table with more columns), how should I proceed? Should the new attributes just go in a different table? I'm also thinking of replicating the database. In case of failure, I don't mind if the latest screenshots aren't backed up on the slave database. Should I just store the screenshots in their own table to simplify the replication? Thanks!
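    One possible shape for this - a single row per test with one column per attribute, and the screenshots split into their own table so replication can skip them - sketched here through plain JDBC (all table, column, and connection names are illustrative assumptions):

    ```java
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class CreateSchema {
        public static void main(String[] args) throws Exception {
            // Connection details are placeholders.
            try (Connection c = DriverManager.getConnection(
                     "jdbc:postgresql://localhost/testsuite", "user", "pass");
                 Statement s = c.createStatement()) {
                // One row per test run; one column per attribute.
                s.execute("CREATE TABLE test_run (" +
                          "  test_id     uuid PRIMARY KEY," +
                          "  name        text NOT NULL," +
                          "  description text," +
                          "  status      text NOT NULL," +   // running / done / waiting
                          "  progress    integer," +          // percent complete
                          "  started_at  timestamptz," +
                          "  ended_at    timestamptz," +
                          "  result      text)");
                // Screenshots kept apart: they churn every 30 s and need not replicate.
                s.execute("CREATE TABLE test_screenshot (" +
                          "  test_id  uuid PRIMARY KEY REFERENCES test_run," +
                          "  taken_at timestamptz NOT NULL," +
                          "  image    bytea NOT NULL)");
            }
        }
    }
    ```

    Adding attributes later is then an ALTER TABLE test_run ADD COLUMN, which keeps existing rows readable as long as the new columns are nullable, so the old format stays compatible.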


  • priority queue with limited space: looking for a good algorithm

    - by SigTerm
    This is not homework. I'm using a small "priority queue" (implemented as an array at the moment) for storing the last N items with the smallest values. This is a bit slow - O(N) item insertion time. The current implementation keeps track of the largest item in the array and discards any item that wouldn't fit, but I would still like to reduce the number of operations further. I'm looking for a priority queue algorithm that matches the following requirements:

    • The queue can be implemented as an array, which has a fixed size and cannot grow. Dynamic memory allocation during any queue operation is strictly forbidden.
    • Anything that doesn't fit into the array is discarded, but the queue keeps all the smallest elements ever encountered.
    • O(log N) insertion time (i.e. adding an element to the queue should take at most O(log N)).
    • (Optional) O(1) access to the largest item in the queue (the queue stores the smallest items, so the largest item will be discarded first, and I'll need it to reduce the number of operations).
    • Easy to implement/understand. Ideally, something similar to binary search - once you understand it, you remember it forever.
    • The elements need not be sorted in any way. I just need to keep the N smallest values ever encountered. When I need them, I'll access all of them at once, so technically it doesn't have to be a queue.

    I initially thought about using binary heaps (they can easily be implemented via arrays), but apparently they don't behave well when the array can't grow any more. Linked lists and arrays would require extra time for moving things around. The STL priority queue grows and uses dynamic allocation (I may be wrong about that, though). So, any other ideas?
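    For what it's worth, a binary max-heap over a fixed-size array can be made to fit these constraints: once full, a new value either beats the current maximum (replace the root and sift down) or is discarded. That gives O(log N) insertion, O(1) access to the largest kept item, and no allocation after construction. A minimal Java sketch:

    ```java
    import java.util.Arrays;

    /** Keeps the N smallest values seen so far, backed by a fixed-size max-heap. */
    public class SmallestN {
        private final long[] heap;   // heap[0] is the largest of the kept values
        private int size;

        public SmallestN(int capacity) { heap = new long[capacity]; }

        public void offer(long v) {
            if (size < heap.length) {            // still room: ordinary sift-up
                int i = size++;
                heap[i] = v;
                while (i > 0 && heap[(i - 1) / 2] < heap[i]) {
                    swap(i, (i - 1) / 2);
                    i = (i - 1) / 2;
                }
            } else if (v < heap[0]) {            // full: replace the max, sift down
                heap[0] = v;
                int i = 0;
                while (true) {
                    int l = 2 * i + 1, r = l + 1, m = i;
                    if (l < size && heap[l] > heap[m]) m = l;
                    if (r < size && heap[r] > heap[m]) m = r;
                    if (m == i) break;
                    swap(i, m);
                    i = m;
                }
            }                                     // else: v is discarded
        }

        public long currentMax() { return heap[0]; }       // O(1), valid once size > 0
        public long[] values() { return Arrays.copyOf(heap, size); }  // unsorted, as allowed

        private void swap(int a, int b) { long t = heap[a]; heap[a] = heap[b]; heap[b] = t; }
    }
    ```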


  • Help understanding the Single Responsibility Principle

    - by user204588
    I'm trying to understand what a responsibility actually is, so I want to use an example of something I'm currently working on. I have an app that imports product information from one system into another system. The user of the app chooses various settings for which product fields from one system they want to use in the other system. So I have a class, say ProductImporter, and its responsibility is to import products. This class is large, probably too large. The methods in this class are complex; one example is getDescription. This method doesn't simply grab a description from the other system, but builds a product description based on various settings set by the user. If I were to add a setting and a new way to get a description, this class would change. So, is that two responsibilities? Is there one that imports products and one that gets a description? If it seems this way, then almost every method I have would be in its own class, and that seems like overkill. I really need a good description of this principle because it's hard for me to completely understand. I don't want needless complexity.
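    One common way to frame SRP is "one reason to change": the import workflow changes when the systems change, while description assembly changes when the user settings change. That argues for splitting off a collaborator, rather than one class per method. A hypothetical sketch (all names and the Settings shape are inventions for illustration):

    ```java
    // Assumed supporting types, purely for illustration.
    interface Settings { boolean useLongDescription(); }
    record SourceProduct(String longText, String shortText) {}
    class TargetProduct {
        private String description;
        void setDescription(String d) { description = d; }
    }

    // Changes only when the description rules (user settings) change.
    class DescriptionBuilder {
        private final Settings settings;
        DescriptionBuilder(Settings settings) { this.settings = settings; }

        String build(SourceProduct p) {
            return settings.useLongDescription() ? p.longText() : p.shortText();
        }
    }

    // Changes only when the import workflow itself changes.
    class ProductImporter {
        private final DescriptionBuilder descriptions;
        ProductImporter(DescriptionBuilder descriptions) { this.descriptions = descriptions; }

        TargetProduct importProduct(SourceProduct p) {
            TargetProduct t = new TargetProduct();
            t.setDescription(descriptions.build(p));
            return t;
        }
    }
    ```

    By this reading, getDescription is not automatically a second responsibility; it becomes one when its reasons to change diverge from the importer's.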


  • Maintaining state and data context between requests in ASP.NET + EF4

    - by Nick
    I have an EF4/ASP.NET web application that is structured to use POCOs and generic repositories, based essentially on this excellent article. The application is relatively sophisticated, with one page that involves the selection and linking of multiple entities to build up a complex user profile. This requires access to multiple entity types (20 or so) and associated repositories across multiple posts. When a repository is first accessed, it uses the existing data context if one exists, or else creates a new context. The problem is that if the lifetime of the context is only per-request (as suggested in the article), then you have to deal with multiple contexts and the complexity of detaching and attaching entities between contexts. My solution is to share the context between posts by creating a single view model that includes all required repositories (initialized to share the same context) plus any associated data, storing this model in a Session variable and retrieving it from Session on subsequent page requests, thereby maintaining the same context across all posts until the profile is saved. This works fine, but I am concerned that I don't actually know exactly what is stored in the model Session variable or, more importantly, the size of the Session variable. So two questions, I suppose: firstly, should I look for a better solution to handle the shared context across posts (any suggestions welcome)? And secondly, what is actually stored in the Session when it includes a repository plus context? Any help appreciated!


  • How do php apps identify a user after the session has timed out?

    - by Bill Zimmerman
    I am trying to understand how PHP apps check to see if a user is logged in. I am specifically looking at MediaWiki's code to help me understand, but these cases should be fairly common to all PHP apps. From what I gather, the main cases are: (1) A user has just logged in or been created; every time they visit a page, PHP knows it's them by checking data shared between the $_SESSION variable and the cookie. (2) A user checked the 'remember me' option on the login page a long time ago. They have a cookie on their computer with a token ID, which is checked against a token on the server to authenticate them. In this case there is no session variable, because the time between accesses could be weeks. My question is: what happens when a user is logged in but the PHP session times out, and they then want to access a page? I would have assumed that there is no easy way for the server to know who the person is, and that they would have to be redirected to the login page. However, MediaWiki manages to identify the user anyway. I've verified that the session files are deleted after X minutes, but when I hit refresh in MediaWiki, it knows which user I am, and the 'token' variable is not included in the cookie.


  • How to pass binary data between two apps using Content Provider?

    - by Viktor
    I need to pass some binary data between two Android apps using a content provider (sharedUserId is not an option). I would prefer not to pass the data (a savegame stored as a file, small in size - under 20k) as a file (i.e. by overriding openFile()), since that would necessitate some complicated temp-file scheme to cope with concurrency between multiple content-provider accesses and a running game. I would like to read the file into memory under a mutex lock and then pass the byte array in the simplest way possible. How do I do this? It seems creating a file in memory is not a possibility due to the return type of openFile(). query() needs to return a Cursor. Using MatrixCursor is not possible, since it applies toString() to all stored objects when reading them. What do I need to do? Implement a custom Cursor? That class has some 30 abstract methods. Do I read the file, put it in a SQLite db, and return the cursor? The complexity of this seemingly simple task is mind-boggling.
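    One route that sidesteps both openFile() and Cursor is ContentProvider.call() (available since API level 11), which can return a Bundle carrying a byte array directly. A minimal sketch - the method name "getSaveGame", the extras key, and the lock are assumptions of this example:

    ```java
    import android.content.ContentProvider;
    import android.content.ContentValues;
    import android.database.Cursor;
    import android.net.Uri;
    import android.os.Bundle;

    public class SaveGameProvider extends ContentProvider {
        private final Object lock = new Object();

        @Override
        public Bundle call(String method, String arg, Bundle extras) {
            if ("getSaveGame".equals(method)) {
                Bundle result = new Bundle();
                synchronized (lock) {   // serialize against a concurrent save by the game
                    result.putByteArray("data", readSaveGameBytes());
                }
                return result;
            }
            return null;
        }

        private byte[] readSaveGameBytes() {
            // Read the <20k savegame file into memory here.
            return new byte[0];
        }

        // Unused mandatory overrides.
        @Override public boolean onCreate() { return true; }
        @Override public Cursor query(Uri u, String[] p, String s, String[] a, String o) { return null; }
        @Override public String getType(Uri u) { return null; }
        @Override public Uri insert(Uri u, ContentValues v) { return null; }
        @Override public int delete(Uri u, String s, String[] a) { return 0; }
        @Override public int update(Uri u, ContentValues v, String s, String[] a) { return 0; }
    }
    ```

    The client side is then a single call: `Bundle b = getContentResolver().call(uri, "getSaveGame", null, null);` followed by `b.getByteArray("data")`.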


  • Small-o(n^2) implementation of Polynomial Multiplication

    - by AlanTuring
    I'm having a little trouble with this problem, which is listed at the back of my book; I'm currently in the middle of test prep, but I can't seem to locate anything regarding it in the book. Anyone got an idea? A real polynomial of degree n is a function of the form f(x) = a_n x^n + ... + a_1 x + a_0, where a_n, ..., a_1, a_0 are real numbers. In computational situations, such a polynomial is represented by the sequence of its coefficients (a_0, a_1, ..., a_n). Assuming that any two real numbers can be added/multiplied in O(1) time, design an o(n^2)-time algorithm to compute, given two real polynomials f(x) and g(x) both of degree n, the product h(x) = f(x)g(x). Your algorithm should **not** be based on the Fast Fourier Transform (FFT) technique. Please note it needs to be little-o(n^2), which means its complexity must be sub-quadratic. The obvious solution that I keep finding is indeed the FFT, but of course I can't use that. There is another method I have found called convolution, where you take polynomial A to be a signal and polynomial B to be a filter: A passed through B yields a shifted signal that has been "smoothed" by B, and the result is A*B. This is supposed to work in O(n log n) time. Of course, I am completely unsure of the implementation. If anyone has any ideas of how to achieve a little-o(n^2) implementation, please do share, thanks.
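    The textbook non-FFT answer here is Karatsuba's divide-and-conquer scheme: split each polynomial into low and high halves and use three recursive multiplications instead of four, for O(n^log2(3)) ≈ O(n^1.585) total work, which is o(n^2). A sketch over coefficient arrays:

    ```java
    import java.util.Arrays;

    public final class Karatsuba {
        /** a[i] is the coefficient of x^i; the result has length a.length + b.length - 1. */
        public static double[] multiply(double[] a, double[] b) {
            int n = Math.max(a.length, b.length);
            if (n <= 32) return schoolbook(a, b);   // cutoff: plain O(n^2) on small inputs
            int h = (n + 1) / 2;
            double[] aLo = Arrays.copyOfRange(a, 0, Math.min(h, a.length));
            double[] aHi = a.length > h ? Arrays.copyOfRange(a, h, a.length) : new double[] {0};
            double[] bLo = Arrays.copyOfRange(b, 0, Math.min(h, b.length));
            double[] bHi = b.length > h ? Arrays.copyOfRange(b, h, b.length) : new double[] {0};

            double[] lo  = multiply(aLo, bLo);                      // aLo * bLo
            double[] hi  = multiply(aHi, bHi);                      // aHi * bHi
            double[] mid = multiply(sum(aLo, aHi), sum(bLo, bHi));  // (aLo+aHi) * (bLo+bHi)

            double[] r = new double[a.length + b.length - 1];
            acc(r, lo, 0, +1);
            acc(r, mid, h, +1); acc(r, lo, h, -1); acc(r, hi, h, -1);  // middle term
            acc(r, hi, 2 * h, +1);
            return r;
        }

        private static double[] schoolbook(double[] a, double[] b) {
            double[] r = new double[a.length + b.length - 1];
            for (int i = 0; i < a.length; i++)
                for (int j = 0; j < b.length; j++)
                    r[i + j] += a[i] * b[j];
            return r;
        }

        private static double[] sum(double[] x, double[] y) {
            double[] r = new double[Math.max(x.length, y.length)];
            for (int i = 0; i < x.length; i++) r[i] += x[i];
            for (int i = 0; i < y.length; i++) r[i] += y[i];
            return r;
        }

        // r[shift + i] += sign * src[i]; out-of-range terms are provably zero and skipped.
        private static void acc(double[] r, double[] src, int shift, int sign) {
            for (int i = 0; i < src.length; i++)
                if (shift + i < r.length) r[shift + i] += sign * src[i];
        }
    }
    ```

    For example, multiply(new double[]{1, 2}, new double[]{3, 4}) returns {3, 10, 8}, i.e. (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2. Toom-Cook generalizes the same idea to more pieces for even lower exponents.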


  • Python Socket Getting Connection Reset

    - by Ian
    I created a threaded socket listener that stores newly accepted connections in a queue. The socket threads then read from the queue and respond. For some reason, when benchmarking with 'ab' (Apache Bench) using a concurrency of 2 or more, I always get a connection reset before it's able to complete the benchmark (this is taking place locally, so there's no external connection issue).

    ```python
    class server:
        _ip = ''
        _port = 8888

        def __init__(self, ip=None, port=None):
            if ip is not None:
                self._ip = ip
            if port is not None:
                self._port = port
            self.server_listener(self._ip, self._port)

        def now(self):
            return time.ctime(time.time())

        def http_responder(self, conn, addr):
            httpobj = http_builder()
            httpobj.header('HTTP/1.1 200 OK')
            httpobj.header('Content-Type: text/html; charset=UTF-8')
            httpobj.header('Connection: close')
            httpobj.body("Everything looks good")
            data = httpobj.generate()
            sent = conn.sendall(data)

        def http_thread(self, id):
            self.log("THREAD %d: Starting Up..." % id)
            while True:
                conn, addr = self.q.get()
                ip, port = addr
                self.log("THREAD %d: responding to request: %s:%s - %s" % (id, ip, port, self.now()))
                self.http_responder(conn, addr)
                self.q.task_done()
                conn.close()

        def server_listener(self, host, port):
            self.q = Queue.Queue(0)
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.bind((host, port))
            sock.listen(5)
            for i in xrange(4):  # thread count
                thread.start_new(self.http_thread, (i + 1,))
            while True:
                self.q.put(sock.accept())
            sock.close()

    server('', 9999)
    ```

    When running the benchmark, I get a totally random number of good requests before it errors out, usually between 4 and 500.

    Edit: Took me a while to figure it out, but the problem was in sock.listen(5). Because I was using Apache Bench with a higher concurrency (5 and up), it was causing the backlog of connections to pile up, at which point the connections started getting dropped by the socket.


  • Scala Interpreter scala.tools.nsc.interpreter.IMain Memory leak

    - by Peter
    I need to write a program that uses the Scala interpreter to run Scala code on the fly. The interpreter must be able to run an unbounded amount of code without being restarted. I know that each time the interpret() method of the class scala.tools.nsc.interpreter.IMain is called, the request is stored, so memory usage keeps growing forever. Here is the idea of what I would like to do:

    ```scala
    var interpreter = new IMain
    while (true) {
      interpreter.interpret(/* some code to be run on the fly */)
    }
    ```

    If the interpret() method stores each request, is there a way to clear the buffer of stored requests? What I am trying now is to count the number of times interpret() has been called and create a new IMain instance when the count reaches 100, for instance. Here is my code:

    ```scala
    var interpreter = new IMain
    var counter = 0
    while (true) {
      interpreter.interpret(/* some code to be run on the fly */)
      counter = counter + 1
      if (counter > 100) {
        interpreter = new IMain
        counter = 0
      }
    }
    ```

    However, I still see that the memory usage goes up forever. It seems that the IMain instances are not garbage-collected by the JVM. Could somebody help me solve this issue? I really need to be able to keep my program running for a long time without restarting, but I cannot afford such memory usage just for the Scala interpreter. Thanks in advance, Pet


  • SQL 2008: Using separate tables for each datatype to return single row

    - by Thomas C
    Hi all. I thought I'd be flexible this time around and let the users decide what contact information they wish to store in their database. In theory it would look like a single row containing, for instance: name, address, zipcode, Category X, List items A. Example FieldType table defining the datatypes available to a user:

    ```
    FieldTypeID, FieldTypeName, TableName
    1, "Integer",  "tblContactInt"
    2, "String50", "tblContactStr50"
    ...
    ```

    A user then defines his fields in the FieldDefinition table:

    ```
    FieldDefinitionID, FieldTypeID, FieldDefinitionName
    11, 2, "Name"
    12, 2, "Adress"
    13, 1, "Age"
    ```

    Finally, we store the actual contact data in separate tables depending on its datatype. The master table only contains the ContactID:

    ```
    tblContact:
    ContactID
    21
    22

    tblContactStr50:
    ContactStr50ID, ContactID, FieldDefinitionID, ContactStr50Value
    31, 21, 11, "Person A"
    32, 21, 12, "Adress of person A"
    33, 22, 11, "Person B"

    tblContactInt:
    ContactIntID, ContactID, FieldDefinitionID, ContactIntValue
    41, 22, 13, 27
    ```

    Question: Is it possible to return the content of these tables in two rows like this:

    ```
    ContactID, Name, Adress, Age
    21, "Person A", "Adress of person A", NULL
    22, "Person B", NULL, 27
    ```

    I have looked into using COALESCE and temp tables, wondering if this is at all possible. Even if it is, maybe I'm only adding complexity while sacrificing performance, for a benefit in data storage and user-defined fields. What do you think? Best Regards /Thomas C


  • Multithreading and Interrupts

    - by Nicholas Flynt
    I'm doing some work on the input buffers for my kernel, and I had some questions. On dual-core machines, I know that more than one "process" can be running simultaneously. What I don't know is how the OS and the individual programs work to protect against collisions in data. There are two things I'd like to know on this topic:

    (1) Where do interrupts occur? Are they guaranteed to occur on one core and not the other, and could this be used to make sure that real-time operations on one core were not interrupted by, say, file I/O, which could be handled on the other core? (I'd logically assume that the interrupts would happen on the first core, but is that always true, and how would you tell? Or does each core perhaps have its own settings for interrupts? Wouldn't that lead to a scenario where each core could react simultaneously to the same interrupt, possibly in different ways?)

    (2) How does a dual-core processor handle a memory-access collision? If one core is reading an address in memory at exactly the same time that another core is writing to that same address, what happens? Is an exception thrown, or is a value read? (I'd assume the write would work either way.) If a value is read, is it guaranteed to be either the old or the new value at the time of the collision?

    I understand that programs should ideally be written to avoid these kinds of complications, but the OS certainly can't expect that, and will need to be able to handle such events without choking on itself.


  • jQuery Tools alert works once (but only once)

    - by Jim Miller
    I'm trying to build a simple alert mechanism with jQuery Tools -- in response to a bit of JavaScript code, pop up an overlay with a message and an OK button that, when clicked, makes the overlay go away. Trivial, or it should be. I've been slavishly following http://flowplayer.org/tools/demos/overlay/trigger.html, and have something that works fine the first time it's invoked, but only that time. If I repeat the JS action that should expose the overlay, it doesn't. My content/DIV:

    ```html
    <div class='modal' id='the_alert'>
      <div id='modal_content' class='modal_content'>
        <h2>hi there</h2>
        this is the body
        <p>
          <button class='close'>OK</button>
        </p>
      </div>
      <div id='modal_background' class='modal_background'><img src='/images/overlay/f9f9f9-180.png' class='stretch' alt='' /></div>
    </div>
    ```

    and the Javascript:

    ```javascript
    function showOverlayDialog() {
      $('#the_alert').overlay({
        mask: {color: '#cccccc', loadSpeed: 200, opacity: 0.9},
        closeOnClick: false,
        load: true
      });
    }
    ```

    As I said: when showOverlayDialog() is invoked the first time, the overlay appears just like it should, and goes away when the "OK" button is clicked. But if I cause showOverlayDialog() to run again, without reloading the page, nothing happens. If I reload the page, then the pattern repeats -- the first invocation brings up the overlay, but the second one doesn't. I'm obviously missing something -- any advice out there? Thanks!


  • HOWTO - Compare a date string to datetime in SQL Server?

    - by Guy
    In SQL Server I have a DATETIME column which includes a time element. Example: '14 AUG 2008 14:23:019'. What is the best method to select only the records for a particular day, ignoring the time part? Example (not safe, as it does not match the time part and returns no rows):

    ```sql
    DECLARE @p_date DATETIME
    SET @p_date = CONVERT(DATETIME, '14 AUG 2008', 106)

    SELECT *
    FROM table1
    WHERE column_datetime = @p_date
    ```

    Note: Given this site is also about jotting down notes and techniques you pick up and then forget, I'm going to post my own answer to this question, as DATETIME handling in MSSQL is probably the topic I look up most in SQL BOL.

    Update: Clarified the example to be more specific.

    Edit: Sorry, but I've had to down-mod WRONG answers (answers that return wrong results).

    @Jorrit: WHERE (date > '20080813' AND date < '20080815') will return the 13th and the 14th.

    @wearejimbo: Close, but no cigar! Badge awarded to you. You missed out records written at 14/08/2008 23:59:001 to 23:59:999 (i.e. less than 1 second before midnight).


  • Maintain List of Active Users for Web

    - by Bryan Marble
    Problem statement - I would like to know whether a particular web-app user is active (i.e. logged in and using the site), and to be able to query for the list of active users or determine a given user's activity status.

    Constraints - It doesn't need to be exact (i.e. if a user was active within a certain timeframe, it's OK to say they're active even if they've since closed their browser).

    I feel like there should be a design pattern for this type of problem, but I haven't been able to find anything here or elsewhere on the web. Approaches I'm considering:

    • Maintain a table that is updated any time a user performs an action (or some subset of actions). Then query for users that have performed an action within some threshold of time.
    • Try to monitor session information and maintain a table that lists logged-in users, timing entries out after a certain period.
    • Some other more standard way of doing this?

    How would you approach this problem (again, from a design-pattern perspective)? Thanks!
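    A minimal in-memory sketch of the first approach (the class shape, window length, and the idea of calling touch() from a request filter are all assumptions of this example; a multi-server deployment would keep the same data in the table the question describes):

    ```java
    import java.time.Duration;
    import java.time.Instant;
    import java.util.List;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ActiveUserTracker {
        private final Map<String, Instant> lastSeen = new ConcurrentHashMap<>();
        private final Duration window;   // e.g. Duration.ofMinutes(15)

        public ActiveUserTracker(Duration window) { this.window = window; }

        /** Call from a request filter on every authenticated request. */
        public void touch(String userId) { lastSeen.put(userId, Instant.now()); }

        /** "Active" = seen within the window; inexact by design, as the constraints allow. */
        public boolean isActive(String userId) {
            Instant t = lastSeen.get(userId);
            return t != null && t.isAfter(Instant.now().minus(window));
        }

        public List<String> activeUsers() {
            Instant cutoff = Instant.now().minus(window);
            return lastSeen.entrySet().stream()
                    .filter(e -> e.getValue().isAfter(cutoff))
                    .map(Map.Entry::getKey)
                    .toList();
        }
    }
    ```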


  • What libraries will parse a DTD using PHP

    - by Chadwick
    I need to parse DTDs using PHP and am hoping there's a simple library to help out. Each DTD has numerous <!ENTITY ... and <!-- Comment ... elements, which I need to act upon. Note that I do not need to validate anything against these DTDs, simply parse them as data files in their own right. A few options I've looked at:

    • James Clarke's SD, which is an option of last resort, but I'd like to avoid the complexity of building/installing/configuring code external to PHP. I'm not sure it's even possible in my situation.
    • PEAR has an XML_DTD_Parser, which requires installing/configuring PEAR and a number of PEAR modules, which I'm also not sure is possible, and would rather avoid. Has anyone used it with success?
    • PHP XML Classes has the class_path_parser, which another site suggested, but it fails to read ENTITY elements. It appears to be using PHP's built-in XML parsing capabilities, which use Expat.
    • PHP's DOMDocument will validate against a DTD, so it must be able to read them, though at first glance I don't see how to get at the DTD parser directly.


  • Design Decision - Scaling out web based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will have 40K users to start with, and within a couple of months is expected to grow to 50M users (not concurrent users, though). I would like an architecture that can be scaled out easily without much effort. To explain, I would like to use a trivial scenario: let's say user entities and services such as CreateUser, AuthenticateUser, etc. are simple method calls for the page controllers. But once the traffic increases, authenticating users (and similar services related to user entities) has to be moved out to a different internal server to spread the load. At the same time, using RPC calls over the network when the user count is only 40K would be overkill. My proposal is to use IPC initially, and when we need to scale out, to switch internally to TCP-based RPC calls so that the system can scale out easily. For example, I am referring to starting with System.IO.Pipes.NamedPipeServerStream and moving to a TcpListener later on. If we have a proper design that encapsulates the approach described above, it would be easy for us to scale the services out onto multiple network servers while avoiding network calls when the user count is small. Is this the best approach? Any suggestions would be great. Note: database scaling is definitely a second-phase optimization, so we have already put an architectural design in place to partition the data easily when traffic increases. The primary bottleneck over this time period will be the application servers.


  • Datatable add new column and values speed issue

    - by Cine
    I am having some speed issues with my DataTables. In this particular case I am using the DataTable as a holder of data; it is never used in a GUI or any other scenario that actually uses any of the fancy features. In my speed trace, this particular constructor showed up as a heavy user of time when my database is ~40k rows. The main consumer was set_Item of DataTable.

    ```csharp
    protected myclass(DataTable dataTable, DataColumn idColumn)
    {
        this.dataTable = dataTable;
        IdColumn = idColumn ?? this.dataTable.Columns.Add(
            string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32"));
        JobIdColumn = this.dataTable.Columns.Add(
            string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32"));
        IsNewColumn = this.dataTable.Columns.Add(
            string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32"));

        int id = 1;
        foreach (DataRow r in this.dataTable.Rows)
        {
            r[JobIdColumn] = id++;
            r[IsNewColumn] = (r[IdColumn] == null || r[IdColumn].ToString() == string.Empty) ? 1 : 0;
        }
    }
    ```

    Digging deeper into the trace, it turns out that set_Item calls EndEdit, which brings my thoughts to the transaction support of the DataTable, for which I have no use in my scenario. So my solution was to open editing on all of the rows and never close them again:

    ```csharp
    _dt.BeginLoadData();
    foreach (DataRow row in _dt.Rows)
        row.BeginEdit();
    ```

    Is there a better solution? This feels too much like a big giant hack that will eventually come back and bite me. You might suggest that I not use a DataTable at all, but I have already considered and rejected that due to the amount of effort that would be required to reimplement it with a custom class. The main reason it is a DataTable is that it is ancient code (.NET 1.1 era) and I don't want to spend that much time changing it; it is also because the original table comes out of a third-party component.


  • silverlight 3: long running wcf call triggers 401.1 (access denied)

    - by sympatric greg
    I have a WCF service consumed by a Silverlight 3 control. The Silverlight client uses a basicHttpBinding that is constructed at runtime from the control's initialization parameters, like this:

    ```csharp
    public static T GetServiceClient<T>(string serviceURL)
    {
        BasicHttpBinding binding = new BasicHttpBinding(
            Application.Current.Host.Source.Scheme.Equals("https", StringComparison.InvariantCultureIgnoreCase)
                ? BasicHttpSecurityMode.Transport
                : BasicHttpSecurityMode.None);
        binding.MaxReceivedMessageSize = int.MaxValue;
        binding.MaxBufferSize = int.MaxValue;
        binding.Security.Mode = BasicHttpSecurityMode.TransportCredentialOnly;
        return (T)Activator.CreateInstance(typeof(T), new object[] { binding, new EndpointAddress(serviceURL) });
    }
    ```

    The service implements Windows security. Calls returned as expected until the result set increased to several thousand rows, at which point HTTP 401.1 (access denied) errors were received. The service's httpBinding defines closeTimeout, openTimeout, receiveTimeout, and sendTimeout values of 10 minutes. If I limit the size of the result set, the call succeeds.

    Additional observations from Fiddler. When Method2 is modified to return a smaller result set (and avoid the problem), control initialization consists of 4 calls:

    1. Service1/Method1 -- result: 401
    2. Service1/Method1 -- result: 401 (this time the header includes the element "Authorization: Negotiate TlRMTV...")
    3. Service1/Method1 -- result: 200
    4. Service1/Method2 -- result: 200 (1.25 seconds)

    When Method2 is configured to return the larger result set, we get:

    1. Service1/Method1 -- result: 401
    2. Service1/Method1 -- result: 401 (this time the header includes the element "Authorization: Negotiate TlRMTV...")
    3. Service1/Method1 -- result: 200
    4. Service1/Method2 -- result: 401.1 (7.5 seconds)
    5. Service1/Method2 -- result: 401.1 (15 ms)
    6. Service1/Method2 -- result: 401.1 (7.5 seconds)


  • Intent.putExtras not consistent

    - by martinjd
    I have a weird situation with AlarmManager. I am scheduling an event with AlarmManager and passing in a string using intent.putExtra. The string is either "silent" or "vibrate", and when the receiver fires, the phone should either turn off the ringer or set the phone to vibrate. The log statement correctly outputs the expected value each time.

    ```java
    Intent intent;
    if (eventType.equals("start")) {
        intent = new Intent(context, SReceiver.class);
    } else {
        intent = new Intent(context, EReceiver.class);
    }
    intent.setAction(eventType + Long.toString(newId));
    Log.v("EditQT", ringerModeType.toUpperCase());
    intent.putExtra("ringerModeType", ringerModeType.toUpperCase());
    PendingIntent appIntent = PendingIntent.getBroadcast(context, 0, intent, 0);
    AlarmManager alarmManager = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
    alarmManager.set(AlarmManager.RTC_WAKEUP, calendar.getTimeInMillis(), appIntent);
    ```

    The receiver that fires when the alarm executes also has a log statement, and I can see the first time around that the statement outputs the expected string, either SILENT or VIBRATE. The alarm executes, and then I change the value for putExtra to the opposite string, but the receiver still displays the previous value even though the log in the code above shows that the new value was passed in. The value for setAction is the same each time.

    ```java
    audioManager = (AudioManager) context.getSystemService(Activity.AUDIO_SERVICE);
    Log.v("Start", intent.getExtras().get("ringerModeType").toString());
    if (intent.getExtras().get("ringerModeType").equals("SILENTMODE")) {
        audioManager.setRingerMode(AudioManager.RINGER_MODE_SILENT);
    } else {
        audioManager.setRingerMode(AudioManager.RINGER_MODE_VIBRATE);
    }
    ```

    Any thoughts?
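    One likely explanation (an inference from these symptoms, not something stated in the question): with the flags argument set to 0, PendingIntent.getBroadcast returns an existing PendingIntent - including its original extras - whenever one already matches the same action and request code. Passing PendingIntent.FLAG_UPDATE_CURRENT tells the system to update the stored extras instead:

    ```java
    // FLAG_UPDATE_CURRENT reuses the matching PendingIntent but replaces its extras,
    // so the receiver sees the latest "ringerModeType" value on each alarm.
    PendingIntent appIntent = PendingIntent.getBroadcast(
            context, 0, intent, PendingIntent.FLAG_UPDATE_CURRENT);
    ```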

