Search Results

Search found 4272 results on 171 pages for 'processes'.

Page 145 of 171

  • Textually diffing JSON

    - by Richard Levasseur
    As part of my release processes, I have to compare some JSON configuration data used by my application. As a first attempt, I just pretty-printed the JSON and diffed the files (using kdiff3 or just diff). As that data has grown, however, kdiff3 confuses different parts of the output, making additions look like giant modifications, odd deletions, etc. It makes it really hard to figure out what is different. I've tried other diff tools, too (meld, kompare, diff, a few others), but they all have the same problem. Despite my best efforts, I can't seem to format the JSON in a way that the diff tools can understand. Example data:

        [
          {
            "name": "date",
            "type": "date",
            "nullable": true,
            "state": "enabled"
          },
          {
            "name": "owner",
            "type": "string",
            "nullable": false,
            "state": "enabled",
          }
          ...lots more...
        ]

    The above probably wouldn't cause the problem (the problem occurs when there begin to be hundreds of lines), but that's the gist of what is being compared. That's just a sample; the full objects have 4-5 attributes, and some attributes have 4-5 attributes in them. The attribute names are pretty uniform, but their values pretty varied. In general, it seems like all the diff tools confuse the closing "}" with the next object's closing "}". I can't seem to break them of this habit. I've tried adding whitespace, changing indentation, and adding some "BEGIN" and "END" strings before and after the respective objects, but the tools still get confused.
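
    One workaround is to canonicalize both documents before diffing: sorted keys, one fully qualified path per line, so the diff tool has unique content to anchor on instead of interchangeable braces. A minimal Python sketch (assuming both inputs parse as JSON; the "$.path = value" line format is just illustrative):

        import json
        import sys
        import difflib

        def canonical_lines(path):
            """Flatten a JSON file into sorted 'path = value' lines."""
            lines = []
            def walk(node, prefix):
                if isinstance(node, dict):
                    for key in sorted(node):
                        walk(node[key], "%s.%s" % (prefix, key))
                elif isinstance(node, list):
                    for i, item in enumerate(node):
                        walk(item, "%s[%d]" % (prefix, i))
                else:
                    lines.append("%s = %r" % (prefix, node))
            with open(path) as f:
                walk(json.load(f), "$")
            return lines

        if __name__ == "__main__":
            old = canonical_lines(sys.argv[1])
            new = canonical_lines(sys.argv[2])
            for line in difflib.unified_diff(old, new, lineterm=""):
                print(line)

    Because every line carries its full path, a deleted object shows up as a few removed lines rather than a cascade of shifted braces. (List indices still shift on insertion, so keying list items by a stable attribute such as "name" would be a further refinement.)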

    Read the article

  • Bash: how to simply parallelize tasks?

    - by NoozNooz42
    I'm writing a tiny script that calls the "PNGOUT" util on a few hundred PNG files. I simply did this:

        find $BASEDIR -iname "*png" -exec pngout {} \;

    And then I looked at my CPU monitor and noticed only one of the cores was used, which is quite sad. In this day and age of dual, quad, octo and hexa (?) core desktops, how do I simply parallelize this task with Bash? (It's not the first time I've had such a need; quite a lot of these utils are mono-threaded... I already had the same problem with MP3 encoders.) Would simply running all the pngout processes in the background do? What would my find command look like then? (I'm not too sure how to mix find and the '&' character.) If I have three hundred pictures, that would mean swapping between three hundred processes, which doesn't seem great anyway!? Or should I copy my three hundred or so files into "nb dirs", where "nb dirs" would be the number of cores, then run "nb finds" concurrently? (Which would be close enough.) But how would I do that?
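
    A minimal sketch of the usual answer, assuming GNU findutils: let xargs fan the work out to a fixed pool of worker processes, so there is never more than one pngout per core in flight.

        #!/bin/sh
        # -print0 / -0 keep odd filenames intact; -n 1 hands each worker
        # one file at a time; -P 4 caps the pool at 4 parallel processes
        # (set it to your core count).
        find "$BASEDIR" -iname '*.png' -print0 | xargs -0 -n 1 -P 4 pngout

    This avoids both extremes from the question: no three hundred simultaneous background jobs, and no manual splitting of files across per-core directories.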

    Read the article

  • global variables in php not working as expected

    - by Josh Smeaton
    I'm having trouble with global variables in PHP. I have a $screen var set in one file, which requires another file that calls an initSession() defined in yet another file. The initSession() declares "global $screen" and then processes $screen further down using the value set in the very first script. How is this possible? To make things more confusing, if you try to set $screen again and then call initSession(), it uses the value first used once again. The following code describes the process. Could someone have a go at explaining this?

        $screen = "list1.inc";    // From model.php
        require "controller.php"; // From model.php
        initSession();            // From controller.php
        global $screen;           // From Include.Session.inc
        echo $screen;             // prints "list1.inc"

        // From anywhere
        $screen = "delete1.inc";  // From model2.php
        require "controller2.php";
        initSession();
        global $screen;
        echo $screen;             // prints "list1.inc"

    Update: If I declare $screen global again just before requiring the second model, $screen is updated properly for the initSession() method. Strange.
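
    A hedged sketch of the scoping rule that usually explains this (assumption: the second assignment actually runs inside a function, e.g. model2.php is require'd from within one, since at the top level "global" is a no-op and plain assignment would just work):

        <?php
        $screen = "list1.inc";       // top-level code is already global scope

        function initSession() {
            global $screen;          // binds the local name to $GLOBALS['screen']
            echo $screen, "\n";      // prints whatever the global holds right now
        }

        initSession();               // "list1.inc"

        function loadModel2() {
            $screen = "delete1.inc"; // no 'global' here: this writes a *local*
        }                            // variable; the global stays untouched

        loadModel2();
        initSession();               // still "list1.inc"
        ?>

    Declaring "global $screen" before the second assignment makes that assignment hit $GLOBALS['screen'] instead of a local, which matches the behaviour described in the update.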

    Read the article

  • C function changes behaviour depending on whether it has a call to printf in it

    - by Daniel
    I have a function that processes some data and finds the threshold that classifies the data with the lowest error. It looks like this:

        void find_threshold(FeatureVal* fvals, sampledata* data, unsigned int num_samples, double* thresh, double* err, int* pol) {
            //code to calculate minThresh, minErr, minPol omitted
            printf("minThresh: %f, minErr: %f, minPol: %d\n", minThresh, minErr, minPol);
            *thresh = minThresh;
            *err = minErr;
            *pol = minPol;
        }

    Then in my test file I have this:

        void test_find_threshold() {
            //code to set up test data omitted
            find_threshold(fvals, sdata, 6, &thresh, &err, &pol);
            printf("Expected 5 got %f\n", thresh);
            assert(eq(thresh, 5.0));
            printf("Expected 1 got %d\n", pol);
            assert(pol == 1);
            printf("Expected 0 got %f\n", err);
            assert(eq(err, 0.0));
        }

    This runs and the test passes with the following output:

        minThresh: 5.000000, minErr: 0.000000, minPol: 1
        Expected 5 got 5.000000
        Expected 1 got 1
        Expected 0 got 0.000000

    However, if I remove the call to printf() from find_threshold, suddenly the test fails! Commenting out the asserts so that I can see what gets returned, the output is:

        Expected 5 got -15.000000
        Expected 1 got -1
        Expected 0 got 0.333333

    I cannot make any sense of this whatsoever.
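
    Behaviour that changes when an unrelated printf() is added or removed is the classic signature of undefined behaviour, most often a read of an uninitialized variable: the call reshuffles stack and register contents enough to mask or expose the garbage. A hypothetical sketch of the usual fix, assuming the omitted code tracks a running minimum (C99 loop syntax):

        #include <float.h>   /* DBL_MAX */

        /* Initialize the accumulators before the scan; reading them
         * uninitialized is undefined behaviour. */
        double minThresh = 0.0;
        double minErr    = DBL_MAX;   /* any real error will beat this */
        int    minPol    = 1;

        for (unsigned int i = 0; i < num_samples; i++) {
            /* ...compute err/thresh/pol for sample i, then:
            if (err < minErr) {
                minErr = err; minThresh = thresh; minPol = pol;
            } */
        }

    Compiling with -Wall -Wuninitialized (or running the test under Valgrind) should point at the exact variable.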

    Read the article

  • Any HTTP proxies with explicit, configurable support for request/response buffering and delayed conn

    - by Carlos Carrasco
    When dealing with mobile clients it is very common to have multi-second delays during the transmission of HTTP requests. If you are serving pages or services out of a prefork Apache, the child processes will be tied up for seconds serving a single mobile client, even if your app server logic is done in 5 ms. I am looking for an HTTP server, balancer or proxy server that supports the following:

    1. A request arrives at the proxy.
    2. The proxy starts buffering the request, in RAM or on disk, including headers and POST/PUT bodies.
    3. The proxy DOES NOT open a connection to the backend server. This is probably the most important part.
    4. The proxy server stops buffering the request when a size limit has been reached (say, 4KB), or the request has been received completely, headers and body.
    5. Only now, with (part of) the request in memory, is a connection opened to the backend and the request relayed.
    6. The backend sends back the response. Again the proxy server starts buffering it immediately (up to a more generous size, say 64KB). Since the proxy has a big enough buffer, the backend response is stored completely in the proxy server in a matter of milliseconds, and the backend process/thread is free to process more requests. The backend connection is immediately closed.
    7. The proxy sends back the response to the mobile client, as fast or as slow as it is capable of, without having a connection to the backend tying up resources.

    I am fairly sure you can do 4-6 with Squid, and nginx appears to support 1-3 (and looks fairly unique in this respect). My question is: is there any proxy server that emphasizes these buffering and not-opening-connections-until-ready capabilities? Maybe there is just a bit of Apache config-fu that makes this buffering behaviour trivial? Is there one that is not a dinosaur like Squid and that supports a lean single-process, asynchronous, event-based execution model? (Side rant: I would be using nginx, but it doesn't support chunked POST bodies, making it useless for serving stuff to mobile clients. Yes, cheap $50 handsets love chunked POSTs... sigh.)
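
    For reference, a minimal nginx sketch of the buffering directives involved on both sides (sizes illustrative; this does not address the chunked-POST limitation mentioned above):

        # Request side: soak up the slow client body before the backend
        # is touched (nginx buffers the full request body by default).
        client_body_buffer_size  16k;   # bigger bodies spill to a temp file

        location / {
            proxy_pass http://backend;

            # Response side: absorb the backend reply quickly, free the
            # backend connection, then drip-feed the mobile client.
            proxy_buffering   on;
            proxy_buffer_size 8k;       # for the response headers
            proxy_buffers     8 8k;     # per-connection body buffer pool
        }

    Apache's mod_proxy, as far as I know, has no equivalent "hold the request until complete" mode in 2.2, which is why the prefork children stay pinned.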

    Read the article

  • To what degree should I use Marshal.ReleaseComObject with Excel Interop objects?

    - by DanM
    I've seen several examples where Marshal.ReleaseComObject() is used with Excel Interop objects (i.e., objects from namespace Microsoft.Office.Interop.Excel), but I've seen it used to various degrees. I'm wondering if I can get away with something like this:

        var application = new ApplicationClass();
        try
        {
            // do work with application, workbooks, worksheets, cells, etc.
        }
        finally
        {
            Marshal.ReleaseComObject(application);
        }

    Or if I need to release every single object created, as in this method:

        public void CreateExcelWorkbookWithSingleSheet()
        {
            var application = new ApplicationClass();
            var workbook = application.Workbooks.Add(_missing);
            var worksheets = workbook.Worksheets;
            for (var worksheetIndex = 1; worksheetIndex < worksheets.Count; worksheetIndex++)
            {
                var worksheet = (WorksheetClass)worksheets[worksheetIndex];
                worksheet.Delete();
                Marshal.ReleaseComObject(worksheet);
            }
            workbook.SaveAs(
                WorkbookPath, _missing, _missing, _missing, _missing, _missing,
                XlSaveAsAccessMode.xlExclusive, _missing, _missing, _missing,
                _missing, _missing);
            workbook.Close(true, _missing, _missing);
            application.Quit();
            Marshal.ReleaseComObject(worksheets);
            Marshal.ReleaseComObject(workbook);
            Marshal.ReleaseComObject(application);
        }

    What prompted me to ask this question is that, being the LINQ devotee I am, I really want to do something like this:

        var worksheetNames = worksheets.Cast<Worksheet>().Select(ws => ws.Name);

    ...but I'm concerned I'll end up with memory leaks or ghost processes if I don't release each worksheet (ws) object. Any insight on this would be appreciated.
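
    A minimal sketch of the cautious convention (the helper name is ours, not the library's; the rule of thumb is "never two dots on interop objects", so every intermediate runtime-callable wrapper stays in a variable you can release):

        using System;
        using System.Runtime.InteropServices;

        static class Com
        {
            // Release an RCW unconditionally from a finally block;
            // safe to call with null or non-COM objects.
            public static void Release(object o)
            {
                if (o != null && Marshal.IsComObject(o))
                    Marshal.ReleaseComObject(o);
            }
        }

        // Usage sketch:
        //   var books = application.Workbooks;   // keep the intermediate
        //   var book  = books.Add(_missing);     // not application.Workbooks.Add(...)
        //   ...
        //   Com.Release(book);
        //   Com.Release(books);
        //   Com.Release(application);

    On the LINQ point: Cast<Worksheet>() materializes an RCW per sheet inside the enumerator, where you cannot release each one individually, so that elegance does carry exactly the risk the question suspects.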

    Read the article

  • Ajax Form submission in Google App Engine with jQuery

    - by user271785
    I could not figure out why this is not working: I need to send a request to the server, generate a fragment of HTML in Python with the meanCal method, and then have that fragment embedded into the submitting HTML file using the calculation method, shown dynamically in the dyContent div. All of this should happen from a single click on the submit button in a form. Any suggestions??? Thanks in advance. The submitting HTML:

        <div id="dyContent" style="height: 200px;">
            waiting for user... {{ mgs }}
        </div>
        <div id="leturetext">
            <form id="mean" method="post" action="/calculation">
                <select name="meanselect">
                    <option value=10>example</option>
                    <option value=11>exercise</option>
                </select>
                <input type="button" name="btnMean" value="Check Results" />
            </form>
        </div>
        <script type="text/javascript">
        $(document).ready(function() {
            //$("#btnMean").live("click", function() {
            $("#mean").submit(function(){
                $.ajax({
                    type: "POST",
                    cache: false,
                    url: "/meanCal",
                    success: function(html) {
                        $("#dyContent").html(html);
                    }
                });
                return false;
            });
        });
        </script>

    The Python:

        class MainHandler(webapp.RequestHandler):
            def get(self):
                path = self.request.path
                if doRender(self, path):
                    return
                doRender(self, 'index.htm')

        class calculationHandler(webapp.RequestHandler):
            def post(self):
                doRender(self, 'Diagnostic_stats.htm', {'mgs': "refreshed.", })
            def get(self):
                doRender(self, 'Diagnostic_stats.htm')

        class meanHandler(webapp.RequestHandler):
            def get(self):
                global GL
                index = self.request.get('meanselect'.value)
                if (index == 10):
                    allData = GL.exampleData
                    dataString = ','.join(map(str, allData))
                    dataMean = (str)(stats.lmean(allData))
                    doRender(self, 'Result.htm', {
                        'dataIn': dataString,
                        'MEAN': "Example Mean is: " + dataMean,
                    })
                    return
                else:
                    allData = GL.exerciseData
                    dataString = ','.join(map(str, allData))
                    dataMean = (str)(stats.lmean(allData))
                    doRender(self, 'Result.htm', {
                        'dataIn': dataString,
                        'MEAN': "Exercise Mean is: " + dataMean,
                    })

        def main():
            global GL
            GL = GlobalVariables()
            application = webapp.WSGIApplication(
                [('/calculation', calculationHandler),
                 ('/meanCal', meanHandler),
                 ('.*', MainHandler),
                 ], debug=True)
            wsgiref.handlers.CGIHandler().run(application)

        if __name__ == '__main__':
            main()
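
    A few mismatches are worth checking first: the button is type="button", so the form's submit handler never fires; $.ajax sends no form data; the request is a POST but meanHandler only implements get(); and self.request.get() returns a string, so comparing against the integer 10 always fails (and the stray 'meanselect'.value would raise an AttributeError). A minimal sketch of the client side with those fixed, keeping the question's ids and URL:

        <input type="submit" name="btnMean" value="Check Results" />

        <script type="text/javascript">
        $(document).ready(function() {
            $("#mean").submit(function() {
                $.ajax({
                    type: "GET",                    // meanHandler defines get()
                    cache: false,
                    url: "/meanCal",
                    data: $("#mean").serialize(),   // actually send meanselect=10|11
                    success: function(html) {
                        $("#dyContent").html(html);
                    }
                });
                return false;
            });
        });
        </script>

    Server side, the lookup then becomes index = self.request.get('meanselect') followed by if index == "10":.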

    Read the article

  • How to recursive rake? -- or suitable alternatives

    - by TerryP
    I want my project's top-level Rakefile to build things using Rakefiles deeper in the tree; i.e., the top-level Rakefile says how to build the project (the big picture) and the lower-level ones build a specific module (the local picture). There is of course a shared set of configuration for the minute details of doing that whenever it can be shared between tasks, so it is mostly about keeping the descriptions of what needs building as close as possible to the sources being built. E.g., /Source/Module/code.foo and co. should be built using the instructions in /Source/Module/Rakefile, and /Rakefile understands the dependencies between modules. I don't care if it uses multiple rake processes (a la recursive make) or just creates separate build environments. Either way it should be self-contained enough to be processed by a queue, so that non-dependent modules could be built simultaneously. The problem is: how the heck do you actually do something like that with Rake!? I haven't been able to find anything meaningful on the Internet, nor in the documentation. I tried creating a new Rake::Application object and setting it up, but whatever methods I try invoking, only exceptions or "Don't know how to build task ':default'" errors get thrown. (Yes, all the Rakefiles have a :default.) Obviously one could just execute 'rake' in a subdirectory for a :modulename task, but that would ditch the options given to the top level; e.g., think of $(MAKE) and $(MAKEFLAGS). Anyone have a clue on how to properly do something like a recursive rake?
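
    A minimal sketch of the subprocess route (assumptions: every module Rakefile has a :default task, and flags you care about are forwarded explicitly, since rake has no built-in $(MAKEFLAGS) equivalent):

        # Top-level Rakefile: drive each module's own Rakefile in a child rake.
        MODULE_DIRS = FileList["Source/*/Rakefile"].map { |f| File.dirname(f) }

        desc "Build every module via its own Rakefile"
        task :default do
          MODULE_DIRS.each do |dir|
            Dir.chdir(dir) do
              sh "rake"   # sh raises on failure, stopping the whole build
            end
          end
        end

    One saving grace for option forwarding: rake turns name=value arguments into ENV entries, and child rake processes inherit the parent's environment, so variable-style options pass through for free.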

    Read the article

  • Unexpected performance curve from CPython merge sort

    - by vkazanov
    I have implemented a naive merge sorting algorithm in Python. The algorithm and test code are below:

        import time
        import random
        import matplotlib.pyplot as plt
        import math
        from collections import deque

        def sort(unsorted):
            if len(unsorted) <= 1:
                return unsorted
            to_merge = deque(deque([elem]) for elem in unsorted)
            while len(to_merge) > 1:
                left = to_merge.popleft()
                right = to_merge.popleft()
                to_merge.append(merge(left, right))
            return to_merge.pop()

        def merge(left, right):
            result = deque()
            while left or right:
                if left and right:
                    elem = left.popleft() if left[0] > right[0] else right.popleft()
                elif not left and right:
                    elem = right.popleft()
                elif not right and left:
                    elem = left.popleft()
                result.append(elem)
            return result

        LOOP_COUNT = 100
        START_N = 1
        END_N = 1000

        def test(fun, test_data):
            start = time.clock()
            for _ in xrange(LOOP_COUNT):
                fun(test_data)
            return time.clock() - start

        def run_test():
            timings, elem_nums = [], []
            test_data = random.sample(xrange(100000), END_N)
            for i in xrange(START_N, END_N):
                loop_test_data = test_data[:i]
                elapsed = test(sort, loop_test_data)
                timings.append(elapsed)
                elem_nums.append(len(loop_test_data))
                print "%f s --- %d elems" % (elapsed, len(loop_test_data))
            plt.plot(elem_nums, timings)
            plt.show()

        run_test()

    As far as I can see everything is OK and I should get a nice N*logN curve as a result. But the picture differs a bit (plot omitted): the curve has unexplained jumps in it. Things I've tried to investigate the issue:

    - PyPy. The curve is OK.
    - Disabled the GC using the gc module. Wrong guess: debug output showed that it doesn't even run until the end of the test.
    - Memory profiling using meliae: nothing special or suspicious.
    - I had another implementation (a recursive one using the same merge function); it acts in a similar way.

    The more full test cycles I create, the more "jumps" there are in the curve. So how can this behaviour be explained and, hopefully, fixed?

    UPD: changed lists to collections.deque
    UPD2: added the full test code
    UPD3: I use Python 2.7.1 on Ubuntu 11.04, on a quad-core 2 GHz notebook. I tried to turn off most other processes: the number of spikes went down, but at least one of them was still there.
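
    Independent of the algorithm, the timing harness itself can produce spikes: time.clock() has coarse granularity on Linux, and each size is measured in a single batch, so any scheduler or allocator hiccup lands directly in the plot. A minimal sketch with timeit, which takes the best of several repeats (assuming the same sort and data names):

        import timeit

        def best_time(data, repeats=5, number=20):
            """Seconds per call, best of `repeats` batches of `number` calls."""
            t = timeit.Timer(lambda: sort(data))
            return min(t.repeat(repeat=repeats, number=number)) / number

    Taking the minimum is the conventional choice because interference only ever makes a run slower, never faster; if the jumps survive this, they really do belong to the algorithm or the allocator.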

    Read the article

  • "Scheduling restart of crashed service", but no call to onStart() follows

    - by kostmo
    In the 1.6 API, is there a way to ensure that the onStart() method of a Service is called after the service is killed due to memory pressure? From the logs, it seems that the "process" that the service belongs to is restarted, but the service itself is not. I have placed a Log.d() call in the onStart() method, and this is not reached. To test my service under memory pressure, I spawn it from an activity, then launch the web browser and visit some JavaScript-heavy websites like Slashdot until my service is killed. The logcat reads:

        03-07 16:44:13.778: INFO/ActivityManager(52): Process com.kostmo.charbuilder.full (pid 2909) has died.
        03-07 16:44:13.778: WARN/ActivityManager(52): Scheduling restart of crashed service com.kostmo.charbuilder.full/com.kostmo.charbuilder.DownloadImagesService in 5000ms
        03-07 16:44:13.778: INFO/ActivityManager(52): Low Memory: No more background processes.
        03-07 16:44:13.778: ERROR/ActivityThread(52): Failed to find provider info for android.server.checkin
        03-07 16:44:13.778: WARN/Checkin(52): Can't log event SYSTEM_SERVICE_LOOPING: java.lang.IllegalArgumentException: Unknown URL content://android.server.checkin/events
        03-07 16:44:18.908: INFO/ActivityManager(52): Start proc com.kostmo.charbuilder.full for service com.kostmo.charbuilder.full/com.kostmo.charbuilder.DownloadImagesService: pid=3560 uid=10027 gids={3003, 1015}
        03-07 16:44:19.868: DEBUG/ddm-heap(3560): Got feature list request
        03-07 16:44:20.128: INFO/ActivityThread(3560): Publishing provider com.kostmo.charbuilder.full.provider.character: com.kostmo.charbuilder.provider.ImageFileContentProvider
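
    This matches the pre-2.0 restart behaviour (later described by the platform as START_STICKY_COMPATIBILITY): the service object is re-created, so onCreate() runs, but another onStart() call is not guaranteed because there is no fresh Intent to deliver. A minimal Java sketch of the usual workaround on 1.6, with the restart-resume logic moved into onCreate() (helper names are hypothetical):

        public class DownloadImagesService extends Service {

            @Override
            public void onCreate() {
                super.onCreate();
                // Runs on every creation, including the post-kill restart,
                // unlike onStart(), which needs a freshly delivered Intent.
                resumePendingDownloads();   // hypothetical: reload state from disk
            }

            @Override
            public void onStart(Intent intent, int startId) {
                // Reached only for explicit startService() calls.
                handleCommand(intent);      // hypothetical
            }

            @Override
            public IBinder onBind(Intent intent) {
                return null;
            }
        }

    From API level 5 onward this is formalized: onStartCommand() returning START_STICKY gets called again with a null Intent after such a restart.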

    Read the article

  • Using pipes in Linux with C

    - by Dave
    Hi, I'm doing a course in Operating Systems and we're supposed to learn how to use pipes to transfer data between processes. We were given this simple piece of code which demonstrates how to use pipes, but I'm having difficulty understanding it.

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>

        main()
        {
            int pipefd[2], n;
            char buff[100];

            if (pipe(pipefd) < 0) {
                printf("can not create pipe \n");
            }
            printf("read fd = %d, write fd = %d \n", pipefd[0], pipefd[1]);

            /* write() sends bytes into the pipe's write end... */
            if (write(pipefd[1], "hello world\n", 12) != 12) {
                printf("pipe write error \n");
            }
            /* ...and read() pulls them back out of the read end. */
            if ((n = read(pipefd[0], buff, sizeof(buff))) <= 0) {
                printf("pipe read error \n");
            }
            /* File descriptor 1 is stdout, so this prints to the screen. */
            write(1, buff, n);
            exit(0);
        }

    What does the write function do? It seems to send data to the pipe and also print it to the screen (at least that is what the second call to write appears to do). Does anyone have suggestions for good websites for learning about topics such as this, FIFOs, signals, and other basic Linux facilities used from C?
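
    A pipe becomes genuinely useful across fork(), which is usually the next step in such courses. A minimal sketch (assumption: one short message; error handling abbreviated):

        #include <stdio.h>
        #include <stdlib.h>
        #include <unistd.h>
        #include <sys/wait.h>

        int main(void)
        {
            int fd[2];
            char buf[100];

            if (pipe(fd) < 0) { perror("pipe"); exit(1); }

            if (fork() == 0) {                     /* child: the reader */
                close(fd[1]);                      /* close unused write end */
                ssize_t n = read(fd[0], buf, sizeof(buf));
                if (n > 0)
                    write(STDOUT_FILENO, buf, n);  /* fd 1: print to terminal */
                exit(0);
            }

            close(fd[0]);                          /* parent: the writer */
            write(fd[1], "hello from parent\n", 18);
            close(fd[1]);                          /* reader then sees EOF */
            wait(NULL);
            return 0;
        }

    Closing the unused ends matters: the reader only sees end-of-file once every write end is closed.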

    Read the article

  • twisted deferred/callbacks and asynchronous execution

    - by NetSkay
    Hey guys, a quick question about Twisted and Python. I'm trying to figure out how I can make my code more asynchronous using Twisted, and I've come to a sort of dead end. If a function of mine returns a deferred object and I add a list of callbacks, the first callback will be called after the deferred function provides some result through deferred_obj.callback; then, in the chain of callbacks, the first callback will do something with the data and call the second callback, and so on. However, chained callbacks would not be considered asynchronous, because they're chained and the event loop will keep firing each one of them consecutively until there are no more, right? However, if I have a deferred object and I attach deferred_obj.callback as a callback of another deferred, as in d.addCallback(deferred_obj.callback), then this would be considered asynchronous: deferred_obj is waiting for its data, and the method that will pass that data is itself waiting on data. Once I call d.callback, 'd' processes the data and then calls deferred_obj.callback; but since that object is a deferred, unlike the case of chained callbacks, it will execute asynchronously... correct? Meaning chained callbacks are NOT asynchronous while chained deferreds are, correct? Thank you. PS: assume all of my code is non-blocking.
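
    A minimal sketch of the distinction, using plain twisted.internet.defer (no reactor is needed to see the ordering):

        from twisted.internet import defer

        # Chained callbacks: after d.callback() fires, these run back-to-back,
        # synchronously, like ordinary function calls.
        d = defer.Deferred()
        d.addCallback(lambda data: data.upper())
        d.addCallback(lambda data: data + "!")

        # Chaining a deferred: returning a Deferred from a callback PAUSES
        # the outer chain until that inner deferred fires.
        waiting = defer.Deferred()
        d.addCallback(lambda data: waiting)
        d.addCallback(lambda data: "resumed with: " + data)

        d.callback("hello")         # the first two callbacks run right now
        waiting.callback("later")   # only now does the rest of the chain run

    So the callbacks themselves are synchronous plumbing; the asynchrony comes from deferreds firing at some later reactor iteration, or from a callback returning another deferred, as above.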

    Read the article

  • phppgadmin : How does it kick users out of postgres, so it can db_drop?

    - by egarcia
    I've got one PostgreSQL database (I'm the owner) and I'd like to drop it and re-create it from a dump. The problem is, there are a couple of applications (two websites, Rails and Perl) that access the db regularly. So I get a "database is being accessed by other users" error. I've read that one possibility is getting the pids of the processes involved and killing them individually. I'd like to do something cleaner, if possible. phpPgAdmin seems to do what I want: I am able to drop schemas using its web interface, even when the websites are up, without getting errors. So I'm investigating how its code works. However, I'm no PHP expert. I'm trying to understand the phpPgAdmin code in order to see how it does it. I found a line (257 in Schemas.php) that says: $data->dropSchema(...). $data is a global variable and I could not find where it is defined. Any pointers would be greatly appreciated.
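
    One likely resolution: DROP SCHEMA, unlike DROP DATABASE, does not require exclusive access to the database, so phpPgAdmin may not be kicking anyone out at all. For the actual drop-and-restore, a minimal SQL sketch (assumption: PostgreSQL 8.4 or newer; in 9.2+ the column is pid rather than procpid):

        -- Terminate every other session connected to the target database.
        SELECT pg_terminate_backend(procpid)
        FROM pg_stat_activity
        WHERE datname = 'mydb';

        -- Nothing holds the database open now.
        DROP DATABASE mydb;

    The clients see dropped connections either way; "cleaner" here just means letting the server do the killing instead of hunting pids from the shell.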

    Read the article

  • Help, my CentOS servers keep going down: "No route to host" after a random uptime [closed]

    - by user249071
    Hello, I have a couple of CentOS Linux servers with a very simple task: they run nginx + FastCGI for PHP, with some read-only NFS mounts between them. They take some RPC commands from a main server to start download processes with wget; nothing fancy. But their behavior is very unstable: they simply go down. We tried to monitor RAM, processor usage, even network connections, and they are not heavily loaded: at most 250 network connections, 15% processor usage, and memory at 2.5 GB out of 8 GB. I have no idea why a Linux server would go down like that; they aren't even public servers, with no domain names installed and no public site serving. The only thing I've discovered is that if I didn't restart the network service every couple of hours or so, the servers became very slow and started apps very slowly, while still not reporting high resource usage. Maybe CentOS doesn't free timed-out connections, or something like that? It's based on Red Hat, right? I'm not a Linux expert, but I'm sure there are a few guys out there who can easily answer this, or at least have some leads on what I can do. I haven't installed snort or other tools to check whether we are under a DoS attack; still, the scheduled script that restarts the network each hour should put the system back online, and it doesn't. Thank you in advance.

    Read the article

  • Throttling CPU/Memory usage of a Thread in Java?

    - by Nalandial
    I'm writing an application that will have multiple threads running, and I want to throttle the CPU/memory usage of those threads. There is a similar question for C++, but I want to try to avoid using C++ and JNI if possible. I realize this might not be possible using a higher-level language, but I'm curious to see if anyone has any ideas.

    EDIT: Added a bounty; I'd like some really good, well-thought-out ideas on this.

    EDIT 2: The situation I need this for is executing other people's code on my server. Basically it is completely arbitrary code, with the only guarantee being that there will be a main method on the class file. Currently, multiple completely disparate classes, which are loaded in at runtime, are executing concurrently as separate threads. I inherited this code (the original author is gone). The way it's written, it would be a pain to refactor to create separate processes for each class that gets executed. If that's the only good way to limit memory usage via the VM arguments, then so be it. But I'd like to know if there's a way to do it with threads. Even as a separate process, I'd like to be able to somehow limit its CPU usage, since, as I mentioned earlier, several of these will be executing at once. I don't want an infinite loop to hog all the resources.

    EDIT 3: An easy way to approximate object size is with Java's Instrumentation classes; specifically, the getObjectSize method. Note that some special setup is needed to use this tool.
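
    For cooperative code, a duty-cycle throttle is about the best a pure-Java, thread-level approach can do. A minimal sketch (assumption: you control the worker loop; this cannot restrain hostile code, which really does need process isolation plus OS limits):

        /** Runs work in small slices, sleeping to cap the CPU duty cycle. */
        public final class ThrottledWorker implements Runnable {
            private final Runnable slice;        // one small unit of work
            private final double maxCpuFraction; // e.g. 0.25 = ~25% of one core

            public ThrottledWorker(Runnable slice, double maxCpuFraction) {
                this.slice = slice;
                this.maxCpuFraction = maxCpuFraction;
            }

            public void run() {
                while (!Thread.currentThread().isInterrupted()) {
                    long start = System.nanoTime();
                    slice.run();
                    long busyNs = System.nanoTime() - start;
                    // Sleep so that busy / (busy + idle) == maxCpuFraction.
                    long idleMs = (long) (busyNs * (1.0 - maxCpuFraction)
                                          / maxCpuFraction) / 1000000L;
                    try {
                        Thread.sleep(Math.max(1L, idleMs));
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            }
        }

    For the arbitrary-code case in EDIT 2, the reliable levers remain per-process: -Xmx for memory, plus OS-level CPU controls (nice, ulimit, or cgroups) around each JVM.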

    Read the article

  • DVCS with a Windows central repository

    - by Mikko Rantanen
    We are currently using VSS for version control. Quite a few of our developers are interested in a distributed model (and want to get rid of VSS). Our network is full of Windows machines, and while our IT department has experience maintaining Linux machines, they would prefer not to. What DVCS systems can host their central repository on Windows while providing:

    - Push access to the repository.
    - Basic authentication. Mostly just a way to allow or deny access to the whole repository; no need for fine-grained access.
    - A server process, so users don't need write access to the repository itself, reducing the risk of accidentally messing with it.

    On the client side, a GUI such as Tortoise would be more or less a requirement (sorry, the Windows shell sucks :|). Ease of installation would be a huge plus, as our IT department is already quite low on resources. And using Windows credentials for authentication would be an advantage, but not a requirement, as long as the client is able to store the credentials. I have had a (really) quick look at Git, Mercurial and Bazaar:

    - Git seemed to use SSH or simple WebDAV for repository access, requiring write permission for the users.
    - Mercurial has a built-in HTTP server, but this seemed to be only for pull purposes. Update: Mercurial supports push as well.
    - Bazaar seemed to use SFTP for repository access, again requiring write permission for the users.

    Are there Windows server processes for any DVCS systems, and has anyone managed to set one up in a Windows land? And apologies if this is a duplicate question; I couldn't find one. Update: Got Mercurial working for push purposes! A detailed list of what was required can be found as an answer below.
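
    For reference, a minimal sketch of the Mercurial-over-HTTP setup hinted at in the update (assumption: a single repository served with hg serve; the same [web] keys apply when hosting hgwebdir under IIS or Apache):

        ; .hg/hgrc inside the served repository
        [web]
        allow_push = *        ; or a comma-separated list of user names
        push_ssl = false      ; allow push over plain HTTP (keep SSL if you have it)

    Then run "hg serve -p 8000" on the server, and clients push with "hg push http://server:8000/". Authentication, if needed, comes from the front-end web server rather than from Mercurial itself.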

    Read the article

  • How to restrict access to a class's data based on state?

    - by Marcus Swope
    In an ETL application I am working on, we have three basic processes:

    1. Validate and parse an XML file of customer information from a third party.
    2. Match values received in the file to values in our system.
    3. Load the customer data into our system.

    The issue here is that we may need to display the customer information from any or all of the above states to an internal user, and there is data in our customer class that will never be populated before the values have been matched in our system (step 2). For this reason, I would like those values to not even be accessible when the customer is in the unmatched state, and I would like to avoid repeated logic everywhere like:

        if (customer.IsMatched)
            DisplayTextOnWeb(customer.SomeMatchedValue);

    My first thought was to add a couple of interfaces on top of Customer that would only expose the properties and behaviors of the current state, and then only deal with those interfaces. The problem with this approach is that there seems to be no good way to move from an ICustomerWithNoMatchedValues to an ICustomerWithMatchedValues without doing direct casts, etc. (or at least I can't find one). I can't be the first to have come across this; how do you normally approach it? As a last caveat, I would like the solution to play nice with FluentNHibernate :) Thanks in advance...
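
    A minimal sketch of one way out of the casting problem: let the transition itself return the next state's interface, so the narrower view is handed to the caller instead of being cast to (the service names here are illustrative, not from the question):

        public interface IMatchingService
        {
            string Lookup(string rawName);
        }

        public interface ICustomerWithNoMatchedValues
        {
            string RawName { get; }
            // Returns the matched view of the same underlying object.
            ICustomerWithMatchedValues Match(IMatchingService matcher);
        }

        public interface ICustomerWithMatchedValues
        {
            string RawName { get; }
            string SomeMatchedValue { get; }
        }

        public class Customer : ICustomerWithNoMatchedValues, ICustomerWithMatchedValues
        {
            public string RawName { get; set; }
            public string SomeMatchedValue { get; private set; }

            public ICustomerWithMatchedValues Match(IMatchingService matcher)
            {
                SomeMatchedValue = matcher.Lookup(RawName);  // hypothetical
                return this;   // same entity, narrower contract
            }
        }

    Since the concrete Customer is still one plain class, FluentNHibernate can keep mapping it directly; only the calling code trades in the interfaces.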

    Read the article

  • Mysql Server Optimization

    - by Ish Kumar
    Hi Geeks, we are having serious MySQL (InnoDB) performance issues at the moment when we do:

    - 10-20 insertions on TABLE1
    - 10-20 updates on TABLE2

    Note: both of the above operations happen within a fraction of a second, and this occurs every few (10-15) minutes. Meanwhile, all online users (approx. 400-600) do a read operation on the join of TABLE1 & TABLE2 every second. Here is our MySQL configuration info: http://docs.google.com/View?id=dfrswh7c_117fmgcmb44

    Issues: lots of queries wait and later expire (seen from phpMyAdmin / processes), and my poor MySQL server crashes sometimes.

    Questions:
    Q1: Any suggestions to optimize at the MySQL level?
    Q2: I am thinking of using persistent connections at the application level; is that right?

    Info added later: the database engine is InnoDB; TABLE1 has 400,000 rows (inserting 8,000 daily) and TABLE2 has 8,000 rows. The 1-second query:

        SELECT b.id, b.user_id, b.description, b.debit, b.created, b.price,
               u.username, u.email, u.mobile
        FROM TABLE1 b, TABLE2 u
        WHERE b.credit = 0 AND b.user_id = u.id AND b.auction_id = "12345"
        ORDER BY b.id DESC
        LIMIT 10;
        -- there are a few more, but they are not as critical

    Indexing is good; we are using indexes wisely. In the above query all ids are indexed. TABLE1 has frequent insertions and TABLE2 has frequent updates.
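
    Single-column indexes on id, user_id and auction_id can still leave MySQL sorting after the join; a composite index lets this exact query be answered newest-first straight from the index. A minimal sketch (assumption: auction_id and credit are columns of TABLE1, as the query implies):

        -- Equality on (auction_id, credit) plus ORDER BY id DESC LIMIT 10
        -- can then walk the index backwards and stop after ten rows.
        ALTER TABLE TABLE1
          ADD INDEX idx_auction_credit (auction_id, credit);

    In InnoDB the primary key is implicitly appended to every secondary index, so (auction_id, credit) effectively ends in id; comparing EXPLAIN output before and after would confirm whether the filesort disappears.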

    Read the article

  • Python access an object byref / Need tagging

    - by Aaron C. de Bruyn
    I need to suck data from stdin and create objects. The incoming data is between 5 and 10 lines long. Each line has a process number and either an IP address or a hash. For example:

        pid=123 ip=192.168.0.1 - some data
        pid=123 hash=ABCDEF0123 - more data
        hash=ABCDEF123 - More data
        ip=192.168.0.1 - even more data

    I need to put this data into a class like:

        class MyData():
            pid = None
            hash = None
            ip = None
            lines = []

    I need to be able to look up the object by IP, HASH, or PID. The tough part is that there are multiple streams of data intermixed coming from stdin. (There could be hundreds or thousands of processes writing data at the same time.) I have regular expressions pulling out the PID, IP, and HASH that I need, but how can I access the object by any of those values? My thought was to do something like this:

        myarray = {}
        for line in sys.stdin.readlines():
            if pid and ip:  # If we can get a PID out of the line
                # Create a new MyData object, assign the PID,
                # and stick it in myarray, accessible by PID.
                myarray[pid] = MyData().pid = pid
                myarray[pid].ip = ip             # Add the IP address to the new object
                myarray[pid].lines.append(data)  # Append the data
                # Take the object stored under the PID and create a key from the IP.
                myarray[ip] = myarray[pid]
            # <snip>do something similar for pid and hash, hash and ip, etc...</snip>

    This gives me an array with two keys (a PID and an IP) that both point to the same object. But on the next iteration of the loop, if I find (for example) an IP and HASH and do:

        myarray[hash] = myarray[ip]

    the following is False:

        myarray[hash] == myarray[ip]

    Hopefully that was clear. I hate to admit that waaay back in the VB days, I remember being able to handle objects ByRef instead of ByVal. Is there something similar in Python? Or am I just approaching this wrong?
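
    Python variables already hold references, so one object can sit under many dict keys; the trap in the sketch above is the chained assignment: myarray[pid] = MyData().pid = pid binds the *string* pid to both targets, so myarray[pid] never holds the object at all. A minimal corrected sketch (parse() is a hypothetical stand-in for the existing regexes):

        registry = {}

        def record(line):
            pid, ip, hash_ = parse(line)   # hypothetical: regex extraction
            # Reuse an object already registered under any known key.
            obj = registry.get(pid) or registry.get(ip) or registry.get(hash_)
            if obj is None:
                obj = MyData()
                obj.lines = []             # per-instance list; see note below
            for key, attr in ((pid, "pid"), (ip, "ip"), (hash_, "hash")):
                if key:
                    setattr(obj, attr, key)
                    registry[key] = obj    # every key references ONE object
            obj.lines.append(line)

    One more ByRef-flavored gotcha in the class as written: lines = [] in the class body is a single list shared by all instances; give each object its own list (as above, or in __init__).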

    Read the article

  • Safe way to set computed environment variables

    - by sfink
    I have a bash script that I am modifying to accept key=value pairs from stdin. (It is spawned by xinetd.) How can I safely convert those key=value pairs into environment variables for subprocesses? I plan to only allow keys that begin with a predefined prefix "CMK_", to avoid IFS or any other "dangerous" variable getting set. But the simplistic approach

        function import () {
            local IFS="="
            while read key val; do
                case "$key" in
                    CMK_*) eval "$key=$val";;
                esac
            done
        }

    is horribly insecure, because $val could contain all sorts of nasty stuff. This seems like it would work:

        shopt -s extglob
        function import () {
            NORMAL_IFS="$IFS"
            local IFS="="
            while read key val; do
                case "$key" in
                    CMK_*([a-zA-Z_]) )
                        IFS="$NORMAL_IFS"
                        eval $key='$val'
                        IFS="="
                        ;;
                esac
            done
        }

    but (1) it uses the funky extglob thing that I've never used before, and (2) it's complicated enough that I can't be comfortable that it's secure. My goal, to be specific, is to allow key=value settings to pass through the bash script into the environment of called processes. It is up to the subprocesses to deal with potentially hostile values getting set. I am modifying someone else's script, so I don't want to just convert it to Perl and be done with it. I would also rather not change it around to invoke the subprocesses differently, with something like:

        #!/bin/sh
        ...start of script...
        perl -nle '($k,$v)=split(/=/,$_,2); $ENV{$k}=$v if $k =~ /^CMK_/; END { exec("subprocess") }'
        ...end of script...
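
    A minimal sketch that avoids eval entirely (assumptions: bash 3.1+ for printf -v; the value is treated as an opaque string and never parsed):

        import () {
            local key val
            while IFS='=' read -r key val; do
                # Whitelist the key's shape; everything else is ignored.
                if [[ $key =~ ^CMK_[A-Za-z_][A-Za-z0-9_]*$ ]]; then
                    printf -v "$key" '%s' "$val"   # assignment without eval
                    export "$key"                  # visible to subprocesses
                fi
            done
        }

    Since $val never passes through eval or word splitting, hostile values land in the environment verbatim, which matches the stated goal of letting the subprocesses defend themselves.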

    Read the article

  • Load Spikes on a Apache MySQL Server with Wordpress MU

    - by Vikram Goyal
    Hi there, I am trying to investigate the reasons for some mysterious load spikes on a Linux Apache server (2.2.14) running PHP 5.2.9, on a dedicated server with enough processing power and memory. My primary web application is a WordPress MU (2.9.2) installation. I have investigated and ruled out DoS attacks and MySQL or Apache configuration issues. The log files don't give me anything of interest, except to tell me that there is severe load. The load (which can go up to 100) just seems to come and go. It helps that I have a script that checks the load every 3 minutes and restarts Apache; restarting it helps, and the server comes back, until it happens again. There seems to be no set time frame or visitor count that triggers it; even a low number of concurrent visitors (20) can set it off. I am almost convinced that there is a rewrite loop somewhere that is causing Apache to go mad: Apache is trying to serve something that causes it to spawn more and more processes until it keels over. My question is: given that I am convinced this is a rewrite issue or something similar, how can I try to figure out what the issue is? What should I monitor? Apache logs are voluminous and not very helpful. Of course, if this is not the issue, then at least knowing what to look for will help me eliminate it and look for something else. Thanks! Vikram
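
    To confirm or rule out the rewrite-loop theory on Apache 2.2, mod_rewrite's own trace log is the direct tool (these directives were replaced by "LogLevel rewrite:traceN" in 2.4). A minimal sketch, assuming mod_rewrite and mod_status are loaded:

        # In the vhost, temporarily -- level 3+ logs every rule evaluation.
        RewriteLog      /var/log/apache2/rewrite.log
        RewriteLogLevel 3

        # Watch what the children are actually doing during a spike.
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    A looping request shows up in rewrite.log as the same URL being rewritten over and over, and /server-status during a spike shows whether the busy children all share one request pattern.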

    Read the article

  • DataReader-DataSet Hybrid solution

    - by G33kKahuna
    My solution architects and I have exhausted both pure DataSet and pure DataReader solutions. Basically we have a Microsoft .NET 2.0 Windows service application that pulls data based on a query and processes additional tasks per record; almost a poor man's workflow system. The recordsets are broad (in terms of columns) and deep (in terms of number of records). We observed that DataSet performs much better in terms of speed, but runs into constraints as the number of records increases: at, say, 100K+ we start seeing System.OutOfMemoryException on a 4 GB machine with processModel configured to run at a memoryLimit of 85. Since this is a multi-threaded app, there can be multiple threads processing different queries and building different DataSets, so we run into the exception sooner in that case. DataReader, on the other hand, works but is a lot slower and hits other constraints: if there is some sort of disconnect it has to start over again, it can leave open connections on the DB side, and in the worst case it takes down the service completely, etc. So we decided the best option would be some sort of hybrid solution. I'm open to guidance and suggestions. Are there any hybrid solutions available? Any other suggestions?
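
    One common hybrid is keyset paging: pull bounded pages into small, short-lived tables so memory stays flat and a dropped connection only costs the current page. A minimal C# 2.0-style sketch (assumptions: SQL Server, an indexed key column id, and hypothetical names connectionString, work_items, ProcessRecord):

        using System.Data;
        using System.Data.SqlClient;

        long lastId = 0;
        while (true)
        {
            DataTable batch = new DataTable();
            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT TOP 1000 * FROM work_items " +
                "WHERE id > @lastId ORDER BY id", conn))
            {
                adapter.SelectCommand.Parameters.AddWithValue("@lastId", lastId);
                adapter.Fill(batch);   // opens and closes the connection itself
            }

            if (batch.Rows.Count == 0)
                break;                 // no more records

            foreach (DataRow row in batch.Rows)
                ProcessRecord(row);    // the per-record task from the question

            lastId = (long) batch.Rows[batch.Rows.Count - 1]["id"];
        }

    Each iteration holds at most 1,000 rows, no connection stays open while records are processed, and multiple worker threads can run the same loop over disjoint key ranges.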

    Read the article

  • Copy a multi-dimensional array by value (not by reference) in PHP.

    - by Simon R
    Language: PHP. I have a form that asks users for their educational details, course details and technical details. When the form is submitted, the page goes to a different page to run processes on the information and save parts to a database. HOWEVER(!) I then need to return to the original page, where access to the original POST information is needed. I thought it would be simple to copy the multi-dimensional (md) $_POST array, by value, into $_SESSION['post']:

        session_start();
        $_SESSION['post'] = $_POST;

    However, this only appears to place the top level of the array into $_SESSION['post'], not doing anything with the children/sub-arrays. An abbreviated form of the md $_POST array is as follows:

        Array
        (
            [formid] => 2
            [edu] => Array
                (
                    ['id'] => Array
                        (
                            [1] => new_1
                            [2] => new_2
                        )
                    ['nameOfInstitution'] => Array
                        (
                            [1] => 1
                            [2] => 2
                        )
                    ['qualification'] => Array
                        (
                            [1] => blah
                            [2] => blah
                        )
                    ['grade'] => Array
                        (
                            [1] => blah
                            [2] => blah
                        )
                )
            [vID] => 61
            [Submit] => Save and Continue
        )

    If I echo $_SESSION['post']['formid'] it writes "2", and if I echo $_SESSION['post']['edu'] it returns "Array". If I check that edu is an array (is_array($_SESSION['post']['edu'])) it returns true. If I echo $_SESSION['post']['edu']['id'] it returns "Array", but when checked (is_array($_SESSION['post']['edu']['id'])) it returns false, and I cannot echo out any of the elements. How do I successfully copy (by value, not by reference) the whole array, including its children, to a new array?
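
    Worth noting before reaching for a deep-copy routine: plain assignment in PHP already copies arrays by value, nested levels included. A minimal sketch demonstrating that (assuming a normal session setup):

        <?php
        session_start();

        $_POST = array(
            'formid' => 2,
            'edu' => array(
                'id'    => array(1 => 'new_1', 2 => 'new_2'),
                'grade' => array(1 => 'blah', 2 => 'blah'),
            ),
        );

        $_SESSION['post'] = $_POST;              // deep copy, by value
        $_POST['edu']['id'][1] = 'changed';

        echo $_SESSION['post']['edu']['id'][1];  // still "new_1"
        ?>

    A hint from the dump above: the sub-keys print as ['id'] with literal quotes, while the top level prints [formid], which suggests the form fields were named like edu['id'][1], so the real key is the quoted string 'id' and $_SESSION['post']['edu']['id'] misses it. Renaming the fields to edu[id][1] would make the natural lookups work.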

    Read the article

  • How can I validate/secure/authenticate a JavaScript-based POST request?

    - by Bungle
    A product I'm helping to develop will basically work like this:

    1. A Web publisher creates a new page on their site that includes a <script> from our server.
    2. When a visitor reaches that new page, the <script> gathers the text content of the page and sends it to our server via a POST request (cross-domain, using a <form> inside of an <iframe>).
    3. Our server processes the text content and returns a response (via JSONP) that includes an HTML fragment listing links to related content around the Web.
    4. This response is cached and served to subsequent visitors until we receive another POST request with text content from the same URL, at which point we regenerate a "fresh" response. These POSTs only happen when our cached TTL expires; the server signals that and prompts the <script> on the page to gather and POST the text content again.

    The problem is that this system seems inherently insecure. In theory, anyone could spoof the HTTP POST request (including the Referer header, so we couldn't just check for that) that sends a page's content to our server. This could include any text content, which we would then use to generate the related-content links for that page. The primary difficulty in making this secure is that our JavaScript is publicly visible. We can't use any kind of private key or other cryptic identifier or pattern, because that won't be secret. Ideally, we need a method that somehow verifies that a POST request corresponding to a particular Web page is authentic. We can't just scrape the Web page and compare the content with what's been POSTed, since the purpose of having JavaScript submit the content is that it may be behind a login system. Any ideas? I hope I've explained the problem well enough. Thanks in advance for any suggestions.
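
    There is no true fix while the client is untrusted, but a short-lived signed token raises the bar from "anyone can POST anytime" to "spoofing requires fetching a fresh token first, and stale updates are rejected". A minimal server-side sketch (Python 3, purely for illustration; the secret never ships in the JavaScript):

        import hashlib
        import hmac
        import time

        SECRET = b"server-side-only-secret"

        def issue_token(page_url):
            """Embedded in the <script> response served for page_url."""
            ts = str(int(time.time()))
            sig = hmac.new(SECRET, ("%s|%s" % (page_url, ts)).encode(),
                           hashlib.sha256).hexdigest()
            return "%s|%s" % (ts, sig)

        def verify_token(page_url, token, max_age=300):
            """Checked when the POSTed text content arrives."""
            ts, sig = token.split("|")
            expected = hmac.new(SECRET, ("%s|%s" % (page_url, ts)).encode(),
                                hashlib.sha256).hexdigest()
            return (hmac.compare_digest(sig, expected)
                    and time.time() - int(ts) < max_age)

    Combined with per-URL rate limiting and sanity checks against the previously cached content, this limits the damage window, even though a determined attacker can still obtain tokens the same way the real script does.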

    Read the article

  • How do I use the 7-zip LZMA SDK 9.x to self-extract?

    - by Christopher
    I am writing an SFX for an installer. I have a number of good reasons for doing this, primarily:

    - The installer is actually a large Python program which uses plugins. Using py2exe or pyinstaller makes doing plugins annoyingly complicated.
    - I want to be able to pass command-line options directly to the Python installer script, as if it were being run directly.
    - Using the existing 7-zip SFX modules is clunky, because I cannot pass command-line options directly into the processes I want to start.
    - I need more flexibility than any of the existing SFX modules I have seen provide.

    I have already tried using the SDK to open the file, seek to the 7z archive signature, and run the decompression from there. That fails because the SzArEx_Open() call appears to assume that you are starting at a 0 offset in the file. I am using the File_Seek() call to perform the seeking. It seems like there must be a way to do this, since the 7z archive format itself supports multiple embedded streams. Any pointers to examples would be awesome, but a narrative explanation is also quite welcome!
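
    Seeking the underlying file once is not enough, because the extraction code issues its own absolute seeks; the usual approach is to wrap the stream so that every seek has the payload's base offset added before it reaches the file, making the embedded archive look like it starts at 0 (in the C SDK that means a custom ISeekInStream whose Seek adds the base, rather than a one-time File_Seek()). Locating the base offset is the easy half; a hedged C sketch (the 6-byte 7z signature is fixed, the rest is illustrative):

        #include <stdio.h>
        #include <string.h>

        /* Every .7z archive begins with these 6 bytes. */
        static const unsigned char k7zSig[6] = { '7', 'z', 0xBC, 0xAF, 0x27, 0x1C };

        /* Return the file offset of the embedded archive, or -1 if not found. */
        long find_7z_offset(FILE *f)
        {
            unsigned char buf[1 << 16];
            size_t keep = 0, n;
            long base = 0;   /* file offset corresponding to buf[0] */

            while ((n = fread(buf + keep, 1, sizeof(buf) - keep, f)) > 0) {
                size_t total = keep + n, i;
                for (i = 0; i + sizeof(k7zSig) <= total; i++)
                    if (memcmp(buf + i, k7zSig, sizeof(k7zSig)) == 0)
                        return base + (long)i;
                keep = total < 5 ? total : 5;        /* magic may straddle reads */
                memmove(buf, buf + total - keep, keep);
                base += (long)(total - keep);
            }
            return -1;
        }

    If the stub binary itself could contain those bytes, embedding the payload offset into the stub at build time is more robust than scanning.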

    Read the article
