Search Results

Search found 66801 results on 2673 pages for 'near real time'.

Page 626 of 2673

  • Designing Bayesian networks

    - by devoured elysium
    I have a basic question about Bayesian networks. Let's assume we have an engine that stops working with probability 1/3. I'll call this variable ENGINE. If it stops working, then your car doesn't work. If the engine is working, then your car will work 99% of the time. I'll call this one CAR. Now, if your car is old (OLD), your engine will stop working 1/2 of the time instead of 1/3 of the time.

    I'm being asked to first design the network and then assign all the conditional probabilities associated with each table. I'd say the diagram of this network would be something like OLD -> ENGINE -> CAR. For the conditional probability tables I did the following:

        OLD   | P(ENGINE fails)
        ------+----------------
        True  | 0.50
        False | 0.33

        ENGINE working | P(CAR works)
        ---------------+-------------
        True           | 0.99
        False          | 0.00

    Now I am having trouble with how to define the probabilities of OLD. In my view, OLD is not something that has a cause-effect relationship with ENGINE; I'd say it is more a characteristic of it. Maybe there is a different way to express this in the diagram? If the diagram is indeed correct, how would I go about building the tables? Thanks
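
    For what it's worth, a root node like OLD just needs a prior probability; it has no parents, so its "table" is a single number. A minimal Python sketch (the prior of 0.2 is made up purely for illustration; the CPT values come from the question) shows how the three tables combine under the chain rule P(OLD, ENGINE, CAR) = P(OLD) * P(ENGINE | OLD) * P(CAR | ENGINE):

        # assumed prior for the root node OLD -- not given in the question
        p_old_true = 0.2

        p_engine_fails = {True: 0.50, False: 0.33}             # P(engine fails | OLD)
        p_car_works_given_engine = {True: 0.99, False: 0.00}   # P(car works | engine working?)

        p_car_works = 0.0
        for old, p_old in ((True, p_old_true), (False, 1.0 - p_old_true)):
            for engine_working in (True, False):
                p_engine = (1.0 - p_engine_fails[old]) if engine_working else p_engine_fails[old]
                p_car_works += p_old * p_engine * p_car_works_given_engine[engine_working]

        print(p_car_works)   # marginal probability that the car works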

  • postgresql table for storing automation test results

    - by Martin
    I am building an automation test suite which runs on multiple machines, all reporting their status to a PostgreSQL database. We will run a number of automated tests, for which we will store the following information:

      - test ID (a GUID)
      - test name
      - test description
      - status (running, done, waiting to be run)
      - progress (%)
      - start time of test
      - end time of test
      - test result
      - latest screenshot of the running test (updated every 30 seconds)

    The number of tests isn't huge (say a few thousand), and each machine (say, 50 of them) has a service which checks the database and figures out whether it's time to start a new automated test on that machine.

    How should I organize my SQL table to store all this information? Is a single table with a column per attribute the way to go? If in the future I need to add attributes but want to keep compatibility with the old database format (i.e. I may not want to delete and recreate the table with more columns), how should I proceed? Should the new attributes just go in a different table?

    I'm also thinking of replicating the database. In case of failure, I don't mind if the latest screenshots aren't backed up on the slave database. Should I just store the screenshots in their own table to simplify the replication? Thanks!
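
    One shape that fits both of those constraints is sketched below: a narrow main table for the hot columns, screenshots in a side table keyed by test ID (which also makes them easy to leave out of replication), and an optional name/value table as an escape hatch for attributes added later. This is only a sketch of the idea, shown with Python's built-in sqlite3 so it runs as-is; the table definitions carry over to PostgreSQL largely unchanged.

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
        CREATE TABLE test_run (
            test_id     TEXT PRIMARY KEY,   -- GUID
            name        TEXT NOT NULL,
            description TEXT,
            status      TEXT NOT NULL,      -- running / done / waiting
            progress    INTEGER DEFAULT 0,  -- percent
            started_at  TEXT,
            ended_at    TEXT,
            result      TEXT
        );

        -- big, frequently rewritten blobs kept away from the hot columns
        CREATE TABLE test_screenshot (
            test_id     TEXT PRIMARY KEY REFERENCES test_run(test_id),
            captured_at TEXT,
            image       BLOB
        );

        -- escape hatch for attributes added later, without ALTER TABLE
        CREATE TABLE test_attribute (
            test_id     TEXT REFERENCES test_run(test_id),
            name        TEXT,
            value       TEXT,
            PRIMARY KEY (test_id, name)
        );
        """)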

  • priority queue with limited space: looking for a good algorithm

    - by SigTerm
    This is not homework. I'm using a small "priority queue" (implemented as an array at the moment) for storing the N items with the smallest values seen so far. This is a bit slow - O(N) item insertion time. The current implementation keeps track of the largest item in the array and discards any item that wouldn't fit, but I would still like to reduce the number of operations further.

    I'm looking for a priority queue algorithm that matches the following requirements:

      - The queue can be implemented as an array, which has a fixed size and _cannot_ grow. Dynamic memory allocation during any queue operation is strictly forbidden.
      - Anything that doesn't fit into the array is discarded, but the queue keeps the smallest elements ever encountered.
      - O(log(N)) insertion time (i.e. adding an element to the queue should take at most O(log(N))).
      - (Optional) O(1) access to the *largest* item in the queue (the queue stores the *smallest* items, so the largest item is the one discarded first, and I'll need it to reduce the number of operations).
      - Easy to implement/understand. Ideally something similar to binary search - once you understand it, you remember it forever.
      - Elements need not be sorted in any way. I just need to keep the N smallest values ever encountered. When I need them, I'll access all of them at once. So technically it doesn't have to be a queue, I just need the N smallest values to be stored.

    I initially thought about using binary heaps (they can easily be implemented via arrays), but apparently they don't behave well when the array can't grow any more. Linked lists and arrays will require extra time for moving things around. The STL priority queue grows and uses dynamic allocation (I may be wrong about that, though). So, any other ideas?
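
    A fixed-capacity max-heap is actually the textbook fit here: keep the N smallest values in a heap ordered largest-first, and on each insert either push (if there is room) or replace the root (if the new value beats the current largest). A rough Python sketch using the standard heapq module (a min-heap, so values are negated to simulate a max-heap); a Python list does allocate behind the scenes, so this only illustrates the algorithm - in C you would preallocate the N slots up front:

        import heapq

        class NSmallest:
            """Keeps the N smallest values seen; O(log N) add, O(1) peek at largest."""
            def __init__(self, capacity):
                self.capacity = capacity
                self._heap = []                       # negated values; _heap[0] is -(largest kept)

            def add(self, value):
                if len(self._heap) < self.capacity:
                    heapq.heappush(self._heap, -value)
                elif value < -self._heap[0]:          # beats the largest value we kept
                    heapq.heapreplace(self._heap, -value)
                # otherwise discard: it cannot be among the N smallest

            def largest_kept(self):
                return -self._heap[0]                 # O(1)

            def values(self):
                return sorted(-v for v in self._heap)

        ns = NSmallest(5)
        for x in (9, 3, 7, 1, 8, 2, 6):
            ns.add(x)
        print(ns.values())        # [1, 2, 3, 6, 7]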

  • How do php apps identify a user after the session has timed out?

    - by Bill Zimmerman
    I am trying to understand how PHP apps check to see if a user is logged in. I am specifically looking at MediaWiki's code to help me understand, but these cases should be fairly common to all PHP apps. From what I gather, the main cases are:

    1. A user just logged in or was created. Every time they visit a page, PHP knows it's them by checking data shared between the $_SESSION variable and the cookie.
    2. A user had the 'remember me' option checked on the login page a long time ago. They have a cookie on their computer with a token ID, which is checked against a token on the server to authenticate them. In this case there is no session variable, because the time between accesses could be weeks.

    My question is: what happens when a user is logged in, but the PHP session times out and they want to access a page? I would have assumed that there is no easy way for the server to know who the person is, and that they would have to be redirected to the login page. However, MediaWiki still recognizes me. I've verified that the session files are deleted after X minutes, but when I hit refresh in MediaWiki, it knows which user I am, and the 'token' variable is not included in the cookie.

  • Python Socket Getting Connection Reset

    - by Ian
    I created a threaded socket listener that stores newly accepted connections in a queue. The socket threads then read from the queue and respond. For some reason, when doing benchmarking with 'ab' (Apache Benchmark) using a concurrency of 2 or more, I always get a connection reset before it's able to complete the benchmark (this is taking place locally, so there's no external connection issue).

        import socket
        import thread
        import time
        import Queue

        class server:
            _ip = ''
            _port = 8888

            def __init__(self, ip=None, port=None):
                if ip is not None:
                    self._ip = ip
                if port is not None:
                    self._port = port
                self.server_listener(self._ip, self._port)

            def now(self):
                return time.ctime(time.time())

            def http_responder(self, conn, addr):
                # http_builder (and self.log below) are my own helpers, not shown here
                httpobj = http_builder()
                httpobj.header('HTTP/1.1 200 OK')
                httpobj.header('Content-Type: text/html; charset=UTF-8')
                httpobj.header('Connection: close')
                httpobj.body("Everything looks good")
                data = httpobj.generate()
                sent = conn.sendall(data)

            def http_thread(self, id):
                self.log("THREAD %d: Starting Up..." % id)
                while True:
                    conn, addr = self.q.get()
                    ip, port = addr
                    self.log("THREAD %d: responding to request: %s:%s - %s" % (id, ip, port, self.now()))
                    self.http_responder(conn, addr)
                    self.q.task_done()
                    conn.close()

            def server_listener(self, host, port):
                self.q = Queue.Queue(0)
                sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                sock.bind( (host, port) )
                sock.listen(5)
                for i in xrange(4):  # thread count
                    thread.start_new(self.http_thread, (i+1, ))
                while True:
                    self.q.put(sock.accept())
                sock.close()

        server('', 9999)

    When running the benchmark, I get a totally random number of good requests before it errors out, usually between 4 and 500.

    Edit: It took me a while to figure it out, but the problem was in sock.listen(5). Because I was using Apache Benchmark with a higher concurrency (5 and up), it was causing the backlog of connections to pile up, at which point connections started getting dropped by the socket.
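
    Given the poster's own diagnosis, one way to address it is simply a deeper accept backlog, so short bursts of connections queue up instead of being reset. A small sketch of the listening end (socket.SOMAXCONN asks for the operating system's maximum; whether that alone is enough depends on the worker threads keeping up):

        import socket

        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(('', 9999))
        sock.listen(socket.SOMAXCONN)   # instead of listen(5)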

  • Multithreading and Interrupts

    - by Nicholas Flynt
    I'm doing some work on the input buffers for my kernel, and I had some questions. On dual-core machines, I know that more than one "process" can be running simultaneously. What I don't know is how the OS and the individual programs work to protect against collisions in data. There are two things I'd like to know on this topic:

    (1) Where do interrupts occur? Are they guaranteed to occur on one core and not the other, and could this be used to make sure that real-time operations on one core were not interrupted by, say, file IO, which could be handled on the other core? (I'd logically assume that the interrupts would happen on the 1st core, but is that always true, and how would you tell? Or perhaps does each core have its own settings for interrupts? Wouldn't that lead to a scenario where each core could react simultaneously to the same interrupt, possibly in different ways?)

    (2) How does the dual-core processor handle an opcode memory collision? If one core is reading an address in memory at exactly the same time that another core is writing to that same address, what happens? Is an exception thrown, or is a value read? (I'd assume the write would work either way.) If a value is read, is it guaranteed to be either the old or the new value at the time of the collision?

    I understand that programs should ideally be written to avoid these kinds of complications, but the OS certainly can't expect that, and will need to be able to handle such events without choking on itself.

  • Scala Interpreter scala.tools.nsc.interpreter.IMain Memory leak

    - by Peter
    I need to write a program using the Scala interpreter to run Scala code on the fly. The interpreter must be able to run an unlimited amount of code without being restarted. I know that each time the interpret() method of the class scala.tools.nsc.interpreter.IMain is called, the request is stored, so the memory usage will keep going up forever. Here is the idea of what I would like to do:

        var interpreter = new IMain
        while (true) {
          interpreter.interpret(/* some code to be run on the fly */)
        }

    If the interpret() method stores the request each time, is there a way to clear the buffer of stored requests? What I am trying to do now is to count the number of times interpret() has been called, and then get a new instance of IMain when the count reaches 100, for instance. Here is my code:

        var interpreter = new IMain
        var counter = 0
        while (true) {
          interpreter.interpret(/* some code to be run on the fly */)
          counter = counter + 1
          if (counter > 100) {
            interpreter = new IMain
            counter = 0
          }
        }

    However, I still see that the memory usage goes up forever. It seems that the IMain instances are not garbage-collected by the JVM. Could somebody help me solve this issue? I really need to be able to keep my program running for a long time without restarting, but I cannot afford such memory usage just for the Scala interpreter. Thanks in advance, Pet

  • jQuery Tools alert works once (but only once)

    - by Jim Miller
    I'm trying to build a simple alert mechanism with jQuery Tools - in response to a bit of JavaScript code, pop up an overlay with a message and an OK button that, when clicked, makes the overlay go away. Trivial, or it should be. I've been slavishly following http://flowplayer.org/tools/demos/overlay/trigger.html, and have something that works fine the first time it's invoked, but only that time. If I repeat the JS action that should expose the overlay, it doesn't appear. My content/DIV:

        <div class='modal' id='the_alert'>
          <div id='modal_content' class='modal_content'>
            <h2>hi there</h2>
            this is the body
            <p>
              <button class='close'>OK</button>
            </p>
          </div>
          <div id='modal_background' class='modal_background'><img src='/images/overlay/f9f9f9-180.png' class='stretch' alt='' /></div>
        </div>

    and the JavaScript:

        function showOverlayDialog() {
          $('#the_alert').overlay({
            mask: {color: '#cccccc', loadSpeed: 200, opacity: 0.9},
            closeOnClick: false,
            load: true
          });
        }

    As I said: when showOverlayDialog() is invoked the first time, the overlay appears just like it should, and goes away when the "OK" button is clicked. But if I cause showOverlayDialog() to run again, without reloading the page, nothing happens. If I reload the page, the pattern repeats - the first invocation brings up the overlay, but the second one doesn't. I'm obviously missing something - any advice out there? Thanks!

  • HOWTO - Compare a date string to datetime in SQL Server?

    - by Guy
    In SQL Server I have a DATETIME column which includes a time element. Example: '14 AUG 2008 14:23:019'

    What is the best method to select only the records for a particular day, ignoring the time part? Example (not safe, as it does not match the time part and returns no rows):

        DECLARE @p_date DATETIME
        SET @p_date = CONVERT( DATETIME, '14 AUG 2008', 106 )

        SELECT *
        FROM table1
        WHERE column_datetime = @p_date

    Note: Given this site is also about jotting down notes and techniques you pick up and then forget, I'm going to post my own answer to this question, as DATETIME handling in MSSQL is probably the topic I look up most in SQL BOL.

    Update: Clarified the example to be more specific.

    Edit: Sorry, but I've had to down-mod WRONG answers (answers that return incorrect results).

    @Jorrit: WHERE (date>'20080813' AND date<'20080815') will return the 13th and the 14th.

    @wearejimbo: Close, but no cigar! Badge awarded to you. You missed out the records written between 14/08/2008 23:59:001 and 23:59:999 (i.e. less than 1 second before midnight).
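
    The usual safe pattern is to compare against a half-open range [start of day, start of next day) rather than using equality, so every time within the day matches and nothing from the following midnight slips in. A small Python sketch of building those bounds for a parameterized query; the "?" placeholder style and the pyodbc-style cursor are assumptions, adjust for your driver:

        from datetime import datetime, timedelta

        day = datetime(2008, 8, 14)
        next_day = day + timedelta(days=1)

        sql = """
            SELECT *
            FROM table1
            WHERE column_datetime >= ? AND column_datetime < ?
        """
        # with an open DB-API cursor (e.g. pyodbc):
        # cursor.execute(sql, (day, next_day))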

  • Maintain List of Active Users for Web

    - by Bryan Marble
    Problem statement: I would like to know whether a particular web app user is active (i.e. logged in and using the site), and be able to query for a list of active users or determine a given user's activity status.

    Constraints: It doesn't need to be exact (i.e. if a user was active within a certain timeframe, it's OK to say they're active even if they've since closed their browser).

    I feel like there should be a design pattern for this type of problem, but I haven't been able to find anything here or elsewhere on the web. Approaches I'm considering:

      - Maintain a table that is updated any time a user performs an action (or some subset of actions). Then query for users that have performed an action within some threshold of time.
      - Try to monitor session information and maintain a table that lists logged-in users and times them out after a certain period of time.
      - Some other, more standard way of doing this?

    How would you approach this problem (again, from a design pattern perspective)? Thanks!
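
    The first approach is essentially a "last seen" timestamp per user plus a cutoff query. A tiny Python sketch of the two operations involved, shown with an in-memory dict for brevity; in a real app the same reads and writes would go against a table or cache keyed by user id:

        import time

        ACTIVE_WINDOW_SECONDS = 15 * 60
        last_seen = {}                      # user_id -> unix timestamp of last action

        def touch(user_id):
            """Call this from request handling whenever the user does something."""
            last_seen[user_id] = time.time()

        def is_active(user_id):
            ts = last_seen.get(user_id)
            return ts is not None and ts >= time.time() - ACTIVE_WINDOW_SECONDS

        def active_users():
            cutoff = time.time() - ACTIVE_WINDOW_SECONDS
            return [uid for uid, ts in last_seen.items() if ts >= cutoff]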

  • How can I use ToUnicode without breaking dead key support?

    - by Cypherjb
    A similar question has already been asked, so I'm not going to waste time re-explaining it; the existing discussion can be found here: http://stackoverflow.com/questions/1964614/toascii-tounicode-in-a-keyboard-hook-destroys-dead-keys

    The reason I'm posting a new question, however, is that I seem to have come across a 'solution', but I'm not quite sure how to implement it. This blog post seems to propose a solution to the problem of ToUnicode killing dead-key support: http://blogs.msdn.com/michkap/archive/2005/01/19/355870.aspx

    However, I'm not sure how to implement the suggested solution. A push in the right direction would be greatly appreciated. To be clear, the part I'm referring to is this:

    "There are two ways to work around this: 1) You can keep calling ToUnicode with the same info until it is cleared out and then call it one more time to put the state back where it was if you had never typed anything, or 2) You can load all of the keyboard info ahead of time and then when they type information you can look up in your own info cache what the keystrokes mean, without having to call APIs later."

    I'm not quite sure how to do either of those things (keyboards and internationalization are far from my strong point), so any help would be greatly appreciated. Thanks

  • Datatable add new column and values speed issue

    - by Cine
    I am having a speed issue with my DataTables. In this particular case I am using the DataTable purely as a holder of data; it is never used in a GUI or any other scenario that actually uses any of the fancy features. In my speed trace, this particular constructor was showing up as a heavy user of time when my database is ~40k rows. The main user was set_Item of DataTable.

        protected myclass(DataTable dataTable, DataColumn idColumn)
        {
            this.dataTable = dataTable;
            IdColumn = idColumn ?? this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32"));
            JobIdColumn = this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32"));
            IsNewColumn = this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32"));

            int id = 1;
            foreach (DataRow r in this.dataTable.Rows)
            {
                r[JobIdColumn] = id++;
                r[IsNewColumn] = (r[IdColumn] == null || r[IdColumn].ToString() == string.Empty) ? 1 : 0;
            }
        }

    Digging deeper into the trace, it turns out that set_Item calls EndEdit, which brings my thoughts to the transaction support of the DataTable, for which I have no use in my scenario. So my solution was to open editing on all of the rows and never close them again:

        _dt.BeginLoadData();
        foreach (DataRow row in _dt.Rows)
            row.BeginEdit();

    Is there a better solution? This feels too much like a big giant hack that will eventually come back and bite me. You might suggest that I don't use a DataTable at all, but I have already considered that and rejected it due to the amount of effort that would be required to reimplement it with a custom class. The main reason it is a DataTable is that it is ancient code (.NET 1.1 era) and I don't want to spend that much time changing it, and also because the original table comes out of a third-party component.

  • Design Decision - Scaling out web based application's architecture

    - by Vadi
    This question is about a design decision. I am currently working on a web project that will have 40K users to start with, and within a couple of months is expected to grow to 50M users (not concurrent users, though). I would like to have an architecture that can be scaled out easily without much effort.

    To explain, I would like to use a trivial scenario. Let's say user entities and services such as CreateUser, AuthenticateUser, etc. are simple method calls for the page controllers. But once the traffic increases, for example, authenticating users (or other services related to user entities) has to be moved out to a different internal server to spread the load. At the same time, using RPC calls over the network when the user count is only 40K would be overkill.

    My proposal was to use IPC initially, and when we need to scale out we can internally switch to TCP-based RPC calls. For example, I am referring to System.IO.Pipes.NamedPipeServerStream to start with, and moving to a TcpListener later on. If we have a proper design that can encapsulate the above approach, it would be easy for us to scale the services out onto multiple network servers, while still avoiding network calls when the user count is small.

    Is this the best approach? Any suggestions would be great.

    Note: Database scaling is definitely the second-phase optimization, so we have already put an architectural design in place to easily partition data when traffic increases. The primary bottleneck over time will be the application servers.
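
    The encapsulation being described is basically a thin transport interface that call sites depend on, with the in-process (or named-pipe) implementation swapped for a networked one when the time comes. A language-agnostic sketch of that shape, shown in Python rather than .NET purely for brevity; the service name and wire format are made up:

        from abc import ABC, abstractmethod
        import json
        import socket

        class UserServiceTransport(ABC):
            @abstractmethod
            def call(self, method, payload):
                ...

        class InProcessTransport(UserServiceTransport):
            """Cheap path while everything fits on one box: a direct call."""
            def __init__(self, service):
                self._service = service

            def call(self, method, payload):
                return getattr(self._service, method)(**payload)

        class TcpTransport(UserServiceTransport):
            """Same interface later on, but the work happens on another server."""
            def __init__(self, host, port):
                self._addr = (host, port)

            def call(self, method, payload):
                with socket.create_connection(self._addr) as s:
                    s.sendall(json.dumps({"method": method, "payload": payload}).encode() + b"\n")
                    return json.loads(s.makefile().readline())

        # Page controllers only ever see the interface, e.g.:
        # transport.call("authenticate_user", {"name": "alice", "password": "..."})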

  • silverlight 3: long running wcf call triggers 401.1 (access denied)

    - by sympatric greg
    I have a WCF service consumed by a Silverlight 3 control. The Silverlight client uses a basicHttpBinding that is constructed at runtime from the control's initialization parameters, like this:

        public static T GetServiceClient<T>(string serviceURL)
        {
            BasicHttpBinding binding = new BasicHttpBinding(Application.Current.Host.Source.Scheme.Equals("https", StringComparison.InvariantCultureIgnoreCase)
                ? BasicHttpSecurityMode.Transport
                : BasicHttpSecurityMode.None);
            binding.MaxReceivedMessageSize = int.MaxValue;
            binding.MaxBufferSize = int.MaxValue;
            binding.Security.Mode = BasicHttpSecurityMode.TransportCredentialOnly;
            return (T)Activator.CreateInstance(typeof(T), new object[] { binding, new EndpointAddress(serviceURL) });
        }

    The service implements Windows security. Calls were returning as expected until the result set increased to several thousand rows, at which point HTTP 401.1 (access denied) errors were received. The service's HttpBinding defines closeTimeout, openTimeout, receiveTimeout and sendTimeout of 10 minutes. If I limit the size of the result set, the call succeeds.

    Additional observations from Fiddler. When Method2 is modified to return a smaller result set (and avoid the problem), control initialization consists of 4 calls:

        Service1/Method1 -- result: 401
        Service1/Method1 -- result: 401 (this time the header includes the element "Authorization: Negotiate TlRMTV...")
        Service1/Method1 -- result: 200
        Service1/Method2 -- result: 200 (1.25 seconds)

    When Method2 is configured to return the larger result set, we get:

        Service1/Method1 -- result: 401
        Service1/Method1 -- result: 401 (this time the header includes the element "Authorization: Negotiate TlRMTV...")
        Service1/Method1 -- result: 200
        Service1/Method2 -- result: 401.1 (7.5 seconds)
        Service1/Method2 -- result: 401.1 (15ms)
        Service1/Method2 -- result: 401.1 (7.5 seconds)

  • What is your workflow when designing HTML/CSS layouts?

    - by DMin
    I have been working with PHP/MySQL as a hobby for close to a couple of years now, I have been working with Photoshop for a very long time, and I know CSS and HTML well enough to write them without any reference, so I would not consider myself very new at this. I have recently started developing websites professionally (I'm the only person working on the project). I have seen the power of Joomla and how you can get a website ready for your customer in a matter of hours (if not minutes).

    I find it very hard to make layouts that remotely look like the themes on Joomla. I find making even simple layouts a cumbersome process that takes a lot of time to get a good enough output. I have a feeling I may not be using the right tools or workflow for the job. What I wanted to find out, from people in the industry, was: how do you make your websites when you do them from scratch? What tools do you use? What is your workflow?

    Just noting a few things I know already, for your reference (you can skip this if you like):

      - I have seen the export-for-web feature of Photoshop that exports CSS - but (as far as I know) it exports only absolutely positioned webpages, so they need to be beaten into shape if you want to use them, for example, for Joomla.
      - I have used the SiteGrinder plugin for Photoshop that exports HTML/CSS. It looks promising, but I haven't tried it extensively.
      - One of the tools that saves loads of time is Firebug. It makes it easy to edit HTML and CSS on the fly and get the page looking exactly as you want it.
      - I recently stumbled upon Fireworks, but haven't explored it much.

    Thanks! :)

  • unix at command pass variable to shell script?

    - by Andrew
    Hi, I'm trying to set up a simple timer that gets started from a Rails application. This timer should wait out its duration and then start a shell script that will start up ./script/runner and complete the initial request. I need script/runner because I need access to ActiveRecord. Here are my test lines in Rails:

        output = `at #{(Time.now + 60).strftime("%H:%M")} < #{Rails.root}/lib/parking_timer.sh STRING_VARIABLE`
        return render :text => output

    Then my parking_timer.sh looks like this:

        #!/bin/sh
        ~/PATH_TO_APP/script/runner -e development ~/PATH_TO_APP/lib/ParkingTimer.rb $1
        echo "All Done"

    Finally, ParkingTimer.rb reads the passed variable with:

        ARGV.each do |a|
          puts "Argument: #{a}"
        end

    The problem is that the Unix command "at" doesn't seem to like variables and only wants to deal with filenames. I get one of two errors depending on how I position the quotes. If I put quotes around the right-hand side, like so ...

        "~/PATH_TO_APP/lib/parking_timer.sh STRING_VARIABLE"

    ... I get:

        -bash: ~/PATH_TO_APP/lib/parking_timer.sh STRING_VARIABLE: No such file or directory

    If I leave the quotes out, I get:

        at: garbled time

    This is all happening on a Mac OS 10.6 box running Rails 2.3 and Ruby 1.8.6. I've already messed around with BackgrounDRb and decided it's a total PITA. I need to be able to cancel the job at any time before it is due.
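
    For what it's worth, at reads the commands it should run from standard input (or from a file via -f); it does not take a script plus arguments on its command line, which is why the argument has to travel inside the command text itself. A sketch of the same scheduling done with Python's subprocess, with the path and argument as placeholders, just to show the shape (in Ruby the equivalent would be piping the command string into at, e.g. via IO.popen):

        import subprocess
        from datetime import datetime, timedelta

        run_at = (datetime.now() + timedelta(minutes=1)).strftime("%H:%M")
        command = "$HOME/PATH_TO_APP/lib/parking_timer.sh STRING_VARIABLE\n"

        # the command line, argument included, goes in on stdin
        subprocess.run(["at", run_at], input=command, text=True, check=True)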

  • Persistent warning message about "initWithDelegate"!

    - by RickiG
    Hi. This is not an actual Xcode error message; it is a warning that has been haunting me for a long time. I have found no way of removing it, and I think I may have overstepped some unwritten naming-convention rule.

    If I build a class, most often extending NSObject, whose only purpose is to do some task and report back when it has data, I often give it a convenience constructor like "initWithDelegate". The first time I did this in my current project was for a class called ISWebservice, which has a protocol like this:

        @protocol ISWebserviceDelegate
        @optional
        - (void) serviceFailed:(NSError*) error;
        - (void) serviceSuccess:(NSArray*) data;
        @required
        @end

    This is declared in my ISWebservice.h interface, right below my import statements. I have other classes that use an "initWithDelegate" convenience constructor, e.g. "InternetConnectionLost.h". This class does not, however, have its methods marked as optional; there are no @optional/@required tags in the declaration, i.e. they are all required.

    Now, the warning pops up every time I instantiate one of these classes with convenience constructors written later than ISWebservice. So when utilizing the "InternetConnectionLost" class, even though the class owning the "InternetConnectionLost" object has nothing to do with the "ISWebservice" class - no imports, no methods being called, no nothing - the warning goes:

        'ClassOwningInternetConnectionLost' does not implement the 'ISWebserviceDelegate' protocol

    It does not break anything, crash at runtime or do me any harm, but it has begun to bug me as I near release. Also, because several classes use the "initWithDelegate" constructor naming, I have 18 of these warnings in my build results, and I am getting uncertain whether I did something wrong, being fairly new to this language. I hope someone can shed a little light on this warning, thank you :)

  • Intent.putExtras not consistent

    - by martinjd
    I have a weird situation with AlarmManager. I am scheduling an event with AlarmManager and passing in a string using intent.putExtra. The string is either silent or vibrate, and when the receiver fires, the phone should either turn off the ringer or set the phone to vibrate. The log statement correctly outputs the expected value each time.

        Intent intent;
        if (eventType.equals("start")) {
            intent = new Intent(context, SReceiver.class);
        } else {
            intent = new Intent(context, EReceiver.class);
        }
        intent.setAction(eventType + Long.toString(newId));
        Log.v("EditQT", ringerModeType.toUpperCase());
        intent.putExtra("ringerModeType", ringerModeType.toUpperCase());
        PendingIntent appIntent = PendingIntent.getBroadcast(context, 0, intent, 0);
        AlarmManager alarmManager = (AlarmManager) getSystemService(Context.ALARM_SERVICE);
        alarmManager.set(AlarmManager.RTC_WAKEUP, calendar.getTimeInMillis(), appIntent);

    The receiver that fires when the alarm executes also has a log statement, and I can see the first time around that the statement outputs the expected string, either SILENT or VIBRATE. The alarm executes, and then I change the value for putExtra to the opposite string, and the receiver still displays the previous value, even though the call from the code above shows that the new value was passed in. The value for setAction is the same each time.

        audioManager = (AudioManager) context.getSystemService(Activity.AUDIO_SERVICE);
        Log.v("Start", intent.getExtras().get("ringerModeType").toString());
        if (intent.getExtras().get("ringerModeType").equals("SILENTMODE")) {
            audioManager.setRingerMode(AudioManager.RINGER_MODE_SILENT);
        } else {
            audioManager.setRingerMode(AudioManager.RINGER_MODE_VIBRATE);
        }

    Any thoughts?

  • Javascript/PHP and timezones

    - by James
    Hi, I'd like to be able to guess the user's timezone offset and whether or not daylight saving is being applied. Currently, the most definitive code that I've found for this is here: http://www.michaelapproved.com/articles/daylight-saving-time-dst-detect/

    So this gives me the offset along with the DST indicator. Now, I want to use these in my PHP scripts in order to output the local date/time for the user... but what's the best way to do this? I figure I have two options:

    a) Pick a random timezone which has the same offset and DST setting from the output of timezone_abbreviations_list(), then call date_timezone_set() with it in order to apply the correct treatment to the time.

    b) Continue treating the date as UTC but just do some timestamp addition to add the appropriate number of hours.

    My feeling is that option (b) is the best way. The reason is that with (a), I could be using a timezone which, although correct in terms of offset/DST, may have some obscure rules in place behind the scenes that could give surprising results (I don't know of any, but nonetheless I don't think I can rule it out).

    I'd then re-check the timezone using JavaScript at the start of each session, in order to capture when either the user's timezone changes (very unlikely) or they pass into the DST period.

    Sorry for the brain dump - I'm really just after some sort of reassurance that the approaches above are valid. Thanks, James.
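
    Option (b) boils down to one subtraction, as long as the sign convention of the reported offset is pinned down. A small sketch in Python (the arithmetic is identical in PHP); it assumes the browser-style convention of JavaScript's Date.getTimezoneOffset(), which reports minutes behind UTC and already reflects DST for the date in question:

        from datetime import datetime, timedelta, timezone

        def to_user_local(utc_dt, js_offset_minutes):
            """utc_dt is an aware UTC datetime; js_offset_minutes is what the
            browser reported (e.g. 300 for a user five hours behind UTC)."""
            return utc_dt - timedelta(minutes=js_offset_minutes)

        now_utc = datetime.now(timezone.utc)
        print(to_user_local(now_utc, 300))    # render this value for the user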

  • How to insert an Array/Object into SQL (best practice)

    - by Jason
    I need to store three items as an array in a single column and be able to quickly/easily modify that data in later functions.

    [--- YOU CAN SKIP THIS PART IF YOU TRUST ME ---]

    To be clear, I love and use x_ref tables all the time, but an x_ref doesn't work here because this is not a one-to-many relationship. I am making a project management tool that, among other things, assigns a user to a project and assigns hours to that project on a weekly basis, per user, sometimes for many weeks into the future. Of course there are many projects, a project can have many team members, and a team member can be involved with many projects at one time, but it's not one-to-many, because a team member can be working many weeks on the same project while having different hours for different weeks. In other words, each object really is unique. Also/finally, this data can be changed at any time by any team member - hence it needs to be easy to manipulate.

    Basically, I need to handle three values (the team member, the week we're talking about, and how many hours) dropped into a project row in the projects table (under the column for project team members) and treated as one item - a team member - that will actually be part of a larger array of all the team members involved in the project.

    [--- END SKIP, START READING HERE :) ---]

    So assuming that the application's general schema and relation tables aren't total crap, and that we are in fact up against a wall in this one case and have to use an array/object as a value for this column, is there a best practice for that? Like a particular SQL data type? A particular object/array format? CSV? JSON? XML?

    Most of the app is in C#, but (for very odd reasons that I won't explain) we could really use any environment if there is a particular one that handles this well. For the moment, I am thinking either (webservice + JS/JSON) or PHP serialize/unserialize (but I am a bit sketched out by the PHP solution because it seems cumbersome when used with Ajax). Thoughts, anyone?
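
    If the column does end up holding a serialized value, JSON is the least painful of the formats listed, mainly because every environment mentioned (C#, JavaScript, PHP) can read and write it with built-in or standard libraries. A tiny Python sketch of the round trip, just to show the shape of the stored value (the field names are made up):

        import json

        assignments = [
            {"member_id": 17, "week": "2010-05-03", "hours": 12},
            {"member_id": 23, "week": "2010-05-03", "hours": 6},
        ]

        column_value = json.dumps(assignments)     # this string goes into the column

        # later, after reading the row back:
        for a in json.loads(column_value):
            print(a["member_id"], a["week"], a["hours"])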

  • Avoiding GC thrashing with WSE 3.0 MTOM service

    - by Leon Breedt
    For historical reasons, I have some WSE 3.0 web services that I cannot upgrade to WCF on the server side yet (it is also a substantial amount of work to do so). These web services are being used for file transfers from client to server, using MTOM encoding. This can also not be changed in the short term, for reasons of compatibility. Secondly, they are being called from both Java and .NET, and therefore need to be cross-platform, hence MTOM.

    How it works is that an "upload" WebMethod is called by the client, sending up a chunk of data at a time, since files being transferred could potentially be gigabytes in size. However, due to not being able to control parts of the stack before the WebMethod is invoked, I cannot control the memory usage patterns of the web service.

    The problem I am running into is that for file sizes from 50MB or so onwards, performance is absolutely killed because of GC, since it appears that WSE 3.0 buffers each chunk received from the client in a new byte[] array, and by the time we've done 50MB we're spending 20-30% of the time doing GC. I've played with various chunk sizes, from 16k to 2MB, with no real great difference in results. Smaller chunks are killed by the latency involved with round-tripping, and larger chunks just postpone the slowdown until GC kicks in.

    Any bright ideas on cutting down on the garbage created by WSE? Can I plug into the pipeline somehow and jury-rig something that has access to the client's request stream and streams it to the WebMethod? I'm aware that it is possible to "stream" responses to the client using WSE (albeit very ugly), but this problem is with requests from the client.

  • Flex 3: should I provide prepared data to my component or make it to process data before display?

    - by grapkulec
    I'm starting to learn a little Flex, just for fun and maybe to prove that I can still learn something new :) I have an idea for a project, and one of its parts is a tree component which could display data in different ways depending on configuration.

    The idea: there is a list of objects having properties like id, date, time, name, description. Sometimes the list should be displayed like this:

      - first level: date
      - second level: time
      - third level: name

    and sometimes like this:

      - first level: year
      - second level: month
      - third level: day
      - fourth level: time and name

    By level I mean level of nesting, of course. So we can have years, that have months, that have days, that have hours, and so forth.

    The problem: what would be the best way to do it? I mean, should I prepare the data for the different ways of nesting outside of the component, or even outside of Flex? I can do it at the web service level in C#, where I plan to have the database access layer, and send Flex a nice, ready-to-display XML or array of objects. But I wonder if that won't cause additional and maybe unnecessary network traffic.

    I tried to hack some code in my component to convert my data objects into XML or an ArrayCollection, but I don't know enough Flex and got stuck on eliminating duplicates and getting specific data by some key value. Usually to do such things I have STL with maps, sets and vectors, and I find Flex arrays and even Dictionary a little bit confusing (I've read the language reference and googled without any significant luck).

    The question: so, to sum things up, should I give my tree component data prepared just for the chosen type of display, or should I try to do it internally inside the component (or some helper class written in ActionScript)?
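
    Whichever side ends up doing it, the nesting itself is just grouping by successive keys. A short sketch of the year/month/day variant in Python, only to show the shape of the transformation (the same few lines translate to ActionScript or C# almost mechanically; field names are taken from the question):

        records = [
            {"id": 1, "date": "2010-05-14", "time": "09:30", "name": "standup", "description": ""},
            {"id": 2, "date": "2010-05-14", "time": "14:00", "name": "review", "description": ""},
            {"id": 3, "date": "2010-06-01", "time": "10:00", "name": "standup", "description": ""},
        ]

        grouped = {}
        for r in records:
            year, month, day = r["date"].split("-")
            day_list = grouped.setdefault(year, {}).setdefault(month, {}).setdefault(day, [])
            day_list.append((r["time"], r["name"]))

        # grouped["2010"]["05"]["14"] == [("09:30", "standup"), ("14:00", "review")]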

  • Google App Engine - Caching generated HTML

    - by Alexander
    I have written a Google App Engine application that programmatically generates a bunch of HTML code that is really the same output for each user who logs into my system, and I know that this is going to be inefficient when the code goes into production. So, I am trying to figure out the best way to cache the generated pages.

    The most probable option is to generate the pages and write them into the database, and then check the time of the database put operation for a given page against the time that the code was last updated. If the code is newer than the last put to the database (for a particular HTML request), new HTML will be generated, served, and cached to the database. If the code is older than the last put to the database, then I will just get the HTML directly from the database and serve it (thereby avoiding all the CPU wasted in generating the HTML). I am not only looking to minimize load times, but to minimize CPU usage.

    However, one issue I am having is that I can't figure out how to programmatically check when the version of code uploaded to App Engine was updated.

    I am open to any suggestions on this approach, or other approaches for caching generated HTML. Note that while memcache could help in this situation, I believe that it is not the final solution, since I really only need to regenerate HTML when the code is updated (as opposed to every time the memcache entry expires).

    Kind regards, and thank you in advance for any suggestions you may be able to offer. -Alex
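
    One commonly used marker for "the code was redeployed" on App Engine is the CURRENT_VERSION_ID value exposed in the request environment; treat its exact behaviour as an assumption to verify for your runtime, but if it holds, the caching scheme reduces to keying stored pages by it. A minimal sketch of the idea, with an in-memory dict standing in for the datastore or memcache entity described above:

        import os

        def current_code_version():
            # App Engine exposes the deployed version in the environment;
            # fall back to a constant for local development.
            return os.environ.get("CURRENT_VERSION_ID", "dev")

        _cache = {}    # stand-in for datastore/memcache keyed storage

        def rendered_page(page_key, generate_html):
            key = (page_key, current_code_version())
            if key not in _cache:
                _cache[key] = generate_html()   # CPU spent only once per deployment
            return _cache[key]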

  • R: disentangling scopes

    - by rescdsk
    Hi. Right now, in my R project, I have functions1.R with doFoo() and doBar(), functions2.R with other functions, and main.R with the main program in it, which first does source('functions1.R'); source('functions2.R') and then calls the other functions.

    I've been starting the program from the R GUI on Mac OS X, with source('main.R'). This is fine the first time, but after that, the variables that were defined during the first run of the program are still defined the second time functions*.R are sourced, and so the functions get a whole bunch of extra variables defined. I don't want that! I want an "undefined variable" error when my function uses a variable it shouldn't! Twice this has given me very late nights of debugging!

    So how do other people deal with this sort of problem? Is there something like source(), but that creates an independent namespace that doesn't fall through to the main one? Making a package seems like one solution, but it seems like a big pain in the butt compared to, e.g., Python, where a source file is automatically a separate namespace.

    Any tips? Thank you!

  • High performance text file parsing in .net

    - by diamandiev
    Here is the situation: I am making a small program to parse server log files. I tested it with a log file containing several thousand requests (between 10,000 and 20,000 - I don't know exactly). What I have to do is load the log text files into memory so that I can query them. This is taking the most resources. The methods that take the most CPU time are these (worst culprits first):

      - string.Split - splits the line values into an array of values
      - string.Contains - checking if the user agent contains a specific agent string (to determine the browser ID)
      - string.ToLower - various purposes
      - StreamReader.ReadLine - to read the log file line by line
      - string.StartsWith - determine if a line is a column definition line or a line with values

    There were some others that I was able to replace. For example, the dictionary getter was also taking a lot of resources, which I had not expected, since it's a dictionary and should have its keys indexed. I replaced it with a multidimensional array and saved some CPU time.

    Now I am running on a fast dual core, and the total time it takes to load the file I mentioned is about 1 second. That is really bad. Imagine a site that has tens of thousands of visits a day - it's going to take minutes to load the log file. So what are my alternatives, if any? Because I think this is just a .NET limitation and I can't do much about it.
