Search Results

Search found 5679 results on 228 pages for 'kill processes'.

Page 164/228

  • Why is this MySQL INSERT INTO running twice?

    - by stuboo
    I'm attempting to use the MySQL insert statement below to add information to a database table. When I execute the script, however, the insert statement is run twice. Here's the URL: mysite.com/save.php?Body=p220,c180 Thanks in advance.

        <?php
        //tipping fees application
        require('base.inc.php');
        require('functions.inc.php');
        // connect to the database & save this message there
        try {
            $dbh = new PDO("mysql:host=$dbhost;dbname=$dbname", $dbuser, $dbpass);
            //$number = formatPhone($_REQUEST['From']);
            //if($number != 'xxx-xxx-xxxx'){die('SMS from unknown number');} // kill this if from anyone but mike
            $message = $_REQUEST['Body'];
            //$Sid = $_REQUEST['SmsSid'];
            $now = time();
            echo $message;
            $message = explode(",", $message);
            echo '<pre>';
            print_r($message);
            echo 'message count = ' . count($message);
            echo '</pre>';
            $i = 0;
            $j = count($message);
            while ($i < $j) {
                $quantity = $message[$i];
                $material = substr($quantity, 0, 1);
                $amount = substr($quantity, 1);
                switch ($material) {
                    case 'p': $m = "paper"; break;
                    case 'c': $m = "containers"; break;
                    default: $m = "other";
                }
                $count = $dbh->exec("INSERT INTO tippingtotals(sid,time,material,weight) VALUES('$i+$j','$now','$m','$amount')");
                echo $count;
                echo '<br />';
                $i++;
            }
            // close the database connection
            $dbh = null;
        } catch (PDOException $e) {
            echo $e->getMessage();
        }
        ?>

    Read the article

  • a completely decoupled OO system ?

    - by shrini1000
    To make an OO system as decoupled as possible, I'm thinking of the following approach:

    1) We run an RMI/directory-like service where objects can register and discover each other. They talk to this service through an interface.
    2) We run a messaging service to which objects can publish messages and register subscription callbacks. Again, this happens through interfaces.
    3) When object A wants to invoke a method on object B, it discovers the target object's unique identity through #1 above, and publishes a message on the message service for object B.
    4) The message service invokes B's callback to give it the message.
    5) B processes the request and sends the response for A on the message service.
    6) A's callback is called and it gets the response.

    I feel this system is as decoupled as practically possible, but it has the following problems: 1) communication is typically asynchronous, 2) hence it's not real-time, and 3) the system as a whole is less efficient. Are there any other practical problems where this design obviously won't be applicable? What are your thoughts on this design in general?
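
    As a concreteness check, here is a toy sketch of steps 1-6 in Python, with threads standing in for distributed objects and all names invented; note that A never holds a reference to B, and the cost is exactly the asynchrony listed above:

        import queue
        import threading

        class Directory:
            """Step 1: a registry where objects discover each other by name."""
            def __init__(self):
                self._mailboxes = {}
            def register(self, name, mailbox):
                self._mailboxes[name] = mailbox
            def lookup(self, name):
                return self._mailboxes[name]

        directory = Directory()
        b_inbox = queue.Queue()      # step 2: B's subscription is modelled as a queue
        directory.register("B", b_inbox)

        def b_loop():
            # steps 4-5: B receives a message, processes it, replies via the message service
            msg = b_inbox.get()
            msg["reply_to"].put({"result": msg["arg"] * 2})

        threading.Thread(target=b_loop).start()

        # step 3: A discovers B through the directory and publishes a message;
        # step 6: A's "callback" is modelled as blocking on its own reply queue.
        a_inbox = queue.Queue()
        directory.lookup("B").put({"arg": 21, "reply_to": a_inbox})
        print(a_inbox.get())         # {'result': 42}, delivered asynchronously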

    Read the article

  • Mono 2.10.5 Runtime error on Ubuntu 11.10

    - by johnluetke
    I've installed mono-runtime via apt in order to run my Mono console application on Ubuntu via SSH. However, when I run the command mono myapp.exe, it exits with no message and my program does nothing. If I pass the -v switch to Mono, as in mono -v myapp.exe, I get about 10k lines of output (as expected, -v is verbose), with the first few lines being:

        converting method System.OutOfMemoryException:.ctor (string)
        Method System.OutOfMemoryException:.ctor (string) emitted at 0xb7052c28 to 0xb7052c4b (code length 35) [myapp.exe]
        converting method (wrapper runtime-invoke) <Module>:runtime_invoke_void__this___object (object,intptr,intptr,intptr)
        Method (wrapper runtime-invoke) <Module>:runtime_invoke_void__this___object (object,intptr,intptr,intptr) emitted at 0xb7052c68 to 0xb7052cf6 (code length 142) [myapp.exe]
        converting method System.SystemException:.ctor (string)

    I read this as the runtime throwing an OutOfMemory exception, but the machine is under no intense load, has plenty of available RAM, and is running nothing other than system processes. I've removed and reinstalled Mono countless times, and have even run the executable on other machines perfectly fine. Am I missing something completely obvious here?

    Read the article

  • Terminal-based snake game: input thread manipulates output

    - by enlightened
    I'm writing a snake game for the terminal, i.e. output via print. The following works just fine:

        while status[snake_monad] do
          print to_string draw canvas, compose_all([
            frame,
            specs,
            snake_to_hash(snake[snake_monad])
          ])
          turn! snake_monad, get_dir
          move! snake_monad, specs
          sleep 0.25
        end

    But I don't want the turn!ing to block, of course. So I put it into a new Thread and let it loop:

        Thread.new do
          loop do
            turn! snake_monad, get_dir
          end
        end

        while status[snake_monad] do
          ... # no turn! here ...
        end

    Which also works logically (the snake is turning), but the output is somehow interspersed with newlines. As soon as I kill the input thread (^C) it looks normal again. So why and how does the thread have any effect on my output? And how do I work around this issue? (I don't know much about threads, even less about them in Ruby. Input and output concurrently on the same terminal make the matter worse, I guess...) Also (not really important): wanting my program as pure as possible, would it be somewhat easily possible to get the input non-blockingly while passing everything around? Thank you!
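
    One likely explanation, offered as a guess: the input thread's blocking terminal reads echo whatever is typed (including the Enter key) onto the same tty the draw loop is printing to, so the "extra newlines" are the echo of the input. The durable fix is to disable echo (raw/cbreak mode, e.g. via curses or termios) and to serialize all writes. A small Python sketch of the dedicate-a-thread-and-serialize pattern (frame contents are hypothetical; echo handling is not shown):

        import sys
        import threading

        screen_lock = threading.Lock()   # serialize every write to the terminal
        direction = "right"

        def input_loop():
            global direction
            for line in sys.stdin:       # blocking reads live on their own thread
                direction = line.strip() or direction

        threading.Thread(target=input_loop, daemon=True).start()

        def draw(frame):
            with screen_lock:            # no other thread can interleave output here
                sys.stdout.write("\x1b[2J\x1b[H" + frame + "\n")
                sys.stdout.flush()

        draw("snake frame goes here")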

    Read the article

  • PHP Session variable isset(..)=1 after session_start()

    - by Nicsoft
    Hello! I guess I am not understanding the scope of session variables, or the session itself, in PHP, hence this question. This is my code:

        if(!session_id()==""){
            echo "Getting rid of session"."</br>";
            session_destroy();
        }
        echo "Before session_start(): ".isset($_SESSION["first_date_of_week"])."</br>";
        session_start();
        echo "After session_start(): ".isset($_SESSION["first_date_of_week"])." ".$_SESSION["first_date_of_week"]->format("Y-m-d")."</br>";

    The output is:

        Before session_start():
        After session_start(): 1 2011-01-09

    How come that when doing isset(..) on the session variable, it is set directly after starting the session, even though I haven't even used it or set it yet? It does, however, still have the same value as before. Also, session_id()=="" since the if-clause is never triggered. I never kill the session; how come it is set to ""? I.e. I refresh the page and expect the session to still be alive. Using isset(..) to test whether the variable has already been set is then pretty useless... Thanks in advance! /Niklas

    Read the article

  • Erlang message loops

    - by Roger Alsing
    How do message loops in Erlang work? Are they synchronous when it comes to processing messages? As far as I understand, the loop will start by "receive"ing a message, then perform something, and then hit another iteration of the loop. So that has to be synchronous, right? If multiple clients send messages to the same message loop, then all those messages are queued and performed one after another, or? To process multiple messages in parallel, you would have to spawn multiple message loops in different processes, right? Or did I misunderstand all of it?
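
    That reading is correct: an Erlang process has a single mailbox, receive takes one message at a time (in order, subject to pattern matching), concurrent senders simply enqueue, and parallelism comes from spawning more processes, each with its own loop. A rough Python analogue of one such mailbox loop:

        import queue
        import threading

        mailbox = queue.Queue()            # the process's single mailbox

        def message_loop():
            # Like an Erlang receive loop: one message at a time, in arrival order.
            while True:
                sender, msg = mailbox.get()
                if msg == "stop":
                    break
                sender.put(f"handled {msg}")   # reply lands in the sender's own queue

        threading.Thread(target=message_loop).start()

        replies = queue.Queue()
        for m in ["a", "b", "c"]:          # many clients may enqueue concurrently...
            mailbox.put((replies, m))
        for _ in range(3):
            print(replies.get())           # ...but processing is strictly one-by-one
        mailbox.put((replies, "stop"))     # let the loop exit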

    Read the article

  • thumbs.db messing up my upload routine

    - by Scott B
    I'm getting the following error while uploading a zip archive:

        Warning: ZipArchive::extractTo(C:\xampplite\htdocs\testsite/wp-content/themes/mytheme//styles\mytheme/Thumbs.db) [ziparchive.extractto]: failed to open stream: Permission denied in C:\xampplite\htdocs\testsite\wp-content\themes\mythem\uploader.php on line 17

    The thing I can't quite figure is that I don't see a thumbs.db file in either the zip archive or the destination folder that was created (the upload still processes, I just get these errors). The function is below; line 17 is commented:

        function openZip($file_to_open) {
            global $target;
            $zip = new ZipArchive();
            $x = $zip->open($file_to_open);
            if($x === true) {
                $zip->extractTo($target); //this is line 17
                $zip->close();
                unlink($file_to_open);
            } else {
                die("There was a problem. Please try again!");
            }
        }
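
    A common workaround, shown here as a Python sketch rather than PHP, is to extract entries one at a time and skip OS metadata such as Thumbs.db instead of extracting the whole archive blindly (archive and target names are hypothetical):

        import os
        import zipfile

        SKIP = {"thumbs.db", ".ds_store"}   # OS metadata that breaks naive extraction

        def extract_clean(archive_path, target_dir):
            with zipfile.ZipFile(archive_path) as zf:
                for name in zf.namelist():
                    if os.path.basename(name).lower() in SKIP:
                        continue            # skip junk entries rather than failing on them
                    zf.extract(name, target_dir)

        extract_clean("theme.zip", "wp-content/themes/mytheme")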

    Read the article

  • What do you do when you feel you need a variadic list comprehension?

    - by cspyr0
    I would like to make a method where I could give it a list of lengths and it would return all combinations of cartesian coordinates up to those lengths. It's easier to explain with an example:

        > cart [2,5]
        [[0,0],[0,1],[0,2],[0,3],[0,4],[1,0],[1,1],[1,2],[1,3],[1,4]]
        > cart [2,2,2]
        [[0,0,0],[0,0,1],[0,1,0],[0,1,1],[1,0,0],[1,0,1],[1,1,0],[1,1,1]]

    A simple list comprehension won't work because I don't know how long the lists are going to be. While I love Haskell's simplicity for many problems, this is one that I could write procedurally (in C or something) in 5 minutes, whereas Haskell gives me an aneurysm! A solution to this specific problem would help me out a lot; I'd also love to hear about your thought processes when tackling stuff like this.
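
    For comparison, here is the same function in Python, whose itertools.product accepts a variable number of iterables, which is exactly what a "variadic comprehension" would have to express (one nested loop per dimension):

        from itertools import product

        def cart(dims):
            # one coordinate range per dimension; product() nests a loop per range
            return [list(p) for p in product(*(range(n) for n in dims))]

        print(cart([2, 5]))     # [[0, 0], [0, 1], ..., [1, 4]]
        print(cart([2, 2, 2]))  # [[0, 0, 0], [0, 0, 1], ..., [1, 1, 1]]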

    Read the article

  • On Solaris, what is the difference between cut and gcut?

    - by Chris J
    I recently came across this crazy script bug on one of my Solaris machines. I found that cut on Solaris skips lines from the files that it processes (or at least very large ones, 800 MB in my case):

        > cut -f 1 test.tsv | wc -l
        457030
        > gcut -f 1 test.tsv | wc -l
        840571
        > cut -f 1 test.tsv > temp_cut_1.txt
        > gcut -f 1 test.tsv > temp_gcut_1.txt
        > diff temp_cut_1.txt temp_gcut_1.txt | grep '[<]' | wc -l
        0

    My question is: what the hell is going on with Solaris cut? My solution is updating my scripts to use gcut, but... what the hell?
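
    One testable hypothesis: POSIX only obliges text utilities to handle lines up to LINE_MAX (commonly 2048 bytes), and the classic Solaris cut is reputed to misbehave on longer lines, while GNU cut (gcut) has no such limit. A quick Python check of whether test.tsv even contains such lines:

        # Hypothesis check: count lines longer than the 2048-byte LINE_MAX
        # that classic Solaris text utilities are allowed to assume.
        with open("test.tsv", "rb") as f:
            long_lines = sum(1 for line in f if len(line) > 2048)
        print(f"{long_lines} lines exceed 2048 bytes")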

    Read the article

  • What's the best Linux backup solution?

    - by Jon Bright
    We have four Linux boxes (all running Debian or Ubuntu) on our office network. None of these boxes is especially critical and they're all using RAID. To date, I've therefore been doing backups of the boxes by having a cron job upload tarballs containing the contents of /etc, MySQL dumps and other such changing, non-packaged data to a box at our geographically separate hosting centre. I've realised, however, that:

    - the tarballs are sufficient to rebuild from, but it's certainly not a painless process to do so (I recently tried this out as part of a hardware upgrade of one of the boxes); long-term, the process isn't sustainable
    - each of the boxes is currently producing a tarball of a couple of hundred MB each day, 99% of which is the same as the previous day
    - partly due to the size issue, the backup process requires more manual intervention than I want (to find whatever 5GB file is inflating the size of the tarball and kill it)
    - again due to the size issue, I'm leaving stuff out which it would be nice to include, such as the contents of users' home directories. There's almost nothing of value there that isn't in source control (and these aren't our main dev boxes), but it would be nice to keep them anyway
    - there must be a better way

    So, my question is, how should I be doing this properly? The requirements are:

    - needs to be an offsite backup (one of the main things I'm doing here is protecting against fire/whatever)
    - should require as little manual intervention as possible (I'm lazy, and box-herding isn't my main job)
    - should continue to scale with a couple more boxes, slightly more data, etc.
    - preferably free/open source (cost isn't the issue, but especially for backups, openness seems like a good thing)
    - an option to produce some kind of DVD/Blu-Ray/whatever backup from time to time wouldn't be bad

    My first thought was that this kind of incremental backup was what tar was created for: create a tar file once each month, add incrementally to it, and rsync the results to the remote box. But others probably have better suggestions.
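
    Established free tools in this space include rsnapshot, rdiff-backup and duplicity, all scriptable from cron. The core idea most of them share is rsync hard-linked snapshots: unchanged files become hard links into the previous snapshot, so each day stores only the delta while still looking like a full copy. A hedged Python sketch of that mechanism (host, paths and the "latest" symlink convention are placeholders):

        import datetime
        import subprocess

        SRC = "/etc"                                  # whatever you back up today
        DEST = "backup@offsite.example.com:/backups"  # placeholder offsite target
        today = datetime.date.today().isoformat()

        # --link-dest hard-links files that are unchanged since the last snapshot,
        # so each daily snapshot only stores the delta but reads like a full copy.
        subprocess.run([
            "rsync", "-a", "--delete",
            "--link-dest=../latest",           # relative to the destination directory
            SRC, f"{DEST}/{today}/",
        ], check=True)

        # Repoint the 'latest' symlink on the remote side for the next run.
        subprocess.run([
            "ssh", "backup@offsite.example.com",
            f"ln -sfn {today} /backups/latest",
        ], check=True)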

    Read the article

  • C# Windows Service Intermittent Method Call

    - by Goober
    Scenario: I have a C# Windows Service that essentially subscribes to some events and, if anything is triggered by the events, carries out a few tasks. The thing is that these events are monitoring processes, which I need to restart at certain times of the day. Question: what's the best way to go about performing this task at an exact time? Thoughts so far: 1) use a timer that checks what time it is every few minutes; 2) something that isn't a timer and doesn't suck as an implementation. Help greatly appreciated.
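
    Rather than polling every few minutes, a common pattern is to compute the delay until the next occurrence of the target time and arm a one-shot timer for exactly that long, re-arming after each run; in C#, System.Threading.Timer with a computed due time fills the same role. The scheduling arithmetic, sketched in Python with a made-up 02:30 restart time:

        import datetime
        import threading

        RESTART_AT = (2, 30)   # hypothetical daily restart time, 02:30

        def seconds_until(hour, minute):
            now = datetime.datetime.now()
            target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
            if target <= now:
                target += datetime.timedelta(days=1)   # already past today; aim for tomorrow
            return (target - now).total_seconds()

        def restart_monitors():
            print("restarting monitored processes")    # stand-in for the real work
            schedule_next()                            # re-arm for the next day

        def schedule_next():
            threading.Timer(seconds_until(*RESTART_AT), restart_monitors).start()

        schedule_next()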

    Read the article

  • How do you manage your sqlserver database projects for new builds and migrations?

    - by Rory
    How do you manage your sql server database build/deploy/migrate for visual studio projects? We have a product that includes a reasonable database part (~100 tables, ~500 procs/functions/views), so we need to be able to deploy new databases of the current version as well as upgrade older databases up to the current version. Currently we maintain separate scripts for creation of new databases and migration between versions. Clearly not ideal, but how is anyone else dealing with this? This is complicated for us by having many customers who each have their own db instance, rather than say just having dev/test/live instances on our own web servers, but the processes around managing dev/test/live for others must be similar.

    Read the article

  • The question about the basics of LINQ to SQL working

    - by Alex
    I just started learning LINQ to SQL, and so far I'm impressed with the ease of use and good performance. I used to think that when doing LINQ queries like

        from Customer in DB.Customers where Customer.Age > 30 select Customer

    LINQ would get all customers from the database ("SELECT * FROM Customers"), move them to the Customers array and then make a search in that array using .NET methods. This is very inefficient: what if there are hundreds of thousands of customers in the database? Making such big SELECT queries would kill the web application. Now, after experiencing how fast LINQ to SQL actually is, I have started to suspect that when doing the query I just wrote, LINQ somehow converts it to a SQL query string

        SELECT * FROM Customers WHERE Age > 30

    and only when necessary does it run the query. So my question is: am I right? And when is the query actually run? The reason why I'm asking is not only because I want to understand how it works in order to build well-optimized applications, but because I came across the following problem. I have 2 tables, one of them is Books, the other has information on how many books were sold on certain days. My goal is to select books that had at least 50 sales/day in the past 10 days. It's done with this simple query:

        from Book in DB.Books
        where (from Sale in DB.Sales
               where Sale.SalesAmount >= 50 and Sale.DateOfSale >= DateTime.Now.AddDays(-10)
               select Sale.BookID).Contains(Book.ID)
        select Book

    The point is, I have to use the checking part in several queries, and I decided to create an array with the IDs of all popular books:

        var popularBooksIDs = from Sale in DB.Sales
                              where Sale.SalesAmount >= 50 and Sale.DateOfSale >= DateTime.Now.AddDays(-10)
                              select Sale.BookID;

    BUT when I try to do the query now:

        from Book in DB.Books where popularBooksIDs.Contains(Book.ID) select Book

    it doesn't work! That's why I think that we can't use these kinds of shortcuts in LINQ to SQL queries, like we can't use them in real SQL. We have to create straightforward queries, am I right?
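
    The first hunch is the right one: LINQ to SQL builds an expression tree, translates it to SQL on the server, and defers execution until the query is enumerated (or forced with ToList(), ToArray(), Count() and the like). An IQueryable subquery used inside Contains is normally translated into a SQL IN clause rather than being forbidden, so the failure above deserves a closer look at the actual exception. Deferred execution itself is easy to demonstrate with a Python generator, which behaves the same way:

        def customers_over(age, rows):
            # Building the generator runs no filtering at all; like an IQueryable,
            # it is a description of the work, not the work itself.
            return (r for r in rows if r["age"] > age)

        rows = [{"age": 25}, {"age": 40}, {"age": 33}]
        query = customers_over(30, rows)   # nothing has executed yet
        rows.append({"age": 51})           # later changes are still visible...
        print(list(query))                 # ...because execution only happens here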

    Read the article

  • Batch file writing to log then ending process

    - by Andrew Service
    I have a batch file that calls a process, and in that process I have:

        IF %ERRORLEVEL% NEQ 0 EXIT /B %ERRORLEVEL%

    Now I want to upgrade this a bit and log a meaningful message to an output log if the process fails; I also do not want the main batch to continue processing, because the next processes are dependent on data from previous calls. I wonder if this would be correct, but I'm not sure:

        CALL Process 1
        IF %ERRORLEVEL% NEQ 0 GOTO ErrorInfirstProcess /B %ERRORLEVEL%
        :ErrorInfirstProcess
        ECHO Process 1 Failed on %Date% at %Time%. >>C:\Log.txt"

        CALL Process 2
        IF %ERRORLEVEL% NEQ 0 GOTO ErrorInSecondProcess /B %ERRORLEVEL%
        :ErrorInSecondProcess
        ECHO Process 2 Failed on %Date% at %Time%. >>C:\Log.txt"

    I also want to know if I still need the /B, or do I need to put an EXIT command after the ECHO? Thanks, A

    Read the article

  • Access denied exception while accessing process.MainModule.FileName

    - by Manjoor
    I am listing all running processes in the system with their full paths. My application runs fine on XP, but on Vista it throws an access denied exception while accessing MainModule.FileName (due to UAC, I think):

        foreach (Process process in Process.GetProcesses())
        {
            sProcess = process.ProcessName;
            sFullpath = process.MainModule.FileName;
            ...
        }

    I did not find a solution to deal with UAC. Any clue?

    Read the article

  • Input questions mysql php html

    - by Marcelo
    (Q1) Hi, I'm using a text box in my project and I can't receive the values that are typed:

        <textarea rows="5" cols="60">
        Type your suggestion
        </textarea>
        <br>
        <input type="submit" name="sugestao" value="Submit" />

    (Sorry, I don't know how to escape HTML code; that's why < was missing.) All I'm getting in the column of the database from this text box is "Submit"; I'd like to receive whatever is written in the text area. How can I make the value equal whatever is typed? (Q2) How can I make sure that I'll only store the same type (int, varchar, text) that I declared in the database? For example: age is an int, but if someone types "abc" in the input, it will be stored in my database as the value 0. How can I forbid this, and only save the age when it's really an int and all the other fields (like name, email) are filled? And, if it's still possible, warn the user that he is typing something wrong (no need to say where). Sorry for any mistakes in English, and thanks for the attention.

    Read the article

  • How to handle set based consistency validation in CQRS?

    - by JD Courtoy
    I have a fairly simple domain model involving a list of Facility aggregate roots. Given that I'm using CQRS and an event bus to handle events raised from the domain, how could you handle validation on sets? For example, say I have the following requirement: Facility names must be unique. Since I'm using an eventually consistent database on the query side, the data in it is not guaranteed to be accurate at the time the event processor processes the event. For example: a FacilityCreatedEvent is in the query database's event processing queue, waiting to be processed and written into the database. A new CreateFacilityCommand is sent to the domain to be processed. The domain services query the read database to see if any other Facility is already registered with that name, but this returns false because the earlier FacilityCreatedEvent has not yet been processed and written to the store. The new CreateFacilityCommand will now succeed and raise another FacilityCreatedEvent, which would blow up when the event processor tries to write it into the database and finds that another Facility already exists with that name.
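
    One commonly cited answer to set-based validation in CQRS is to enforce the invariant in a small, immediately consistent store consulted by the command handler (a unique index of reserved names), instead of trusting the eventually consistent read model. A hedged sketch, with a hypothetical facility_names table:

        import sqlite3

        # A tiny, immediately consistent reservation table on the command side.
        # The PRIMARY KEY constraint, not the eventually consistent read model,
        # is what enforces the set-wide invariant.
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE facility_names (name TEXT PRIMARY KEY)")

        def handle_create_facility(name):
            try:
                with db:   # commit on success, roll back on error
                    db.execute("INSERT INTO facility_names (name) VALUES (?)", (name,))
            except sqlite3.IntegrityError:
                raise ValueError(f"facility name {name!r} already taken")
            # only now is it safe to emit FacilityCreatedEvent

        handle_create_facility("North Plant")
        try:
            handle_create_facility("North Plant")
        except ValueError as e:
            print(e)   # facility name 'North Plant' already taken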

    Read the article

  • Seam log4j credential logs

    - by Marc
    In Seam, using log4j, I would like my info, warn and error logs to always include the logged-in user's name (when there is one) with whatever the log message is. Since this is a consistent requirement, I do not want to have to grab the logged-in user name and prefix the message by hand, so I attempted to populate the log4j NDC to have it as a field of the log message, pushing the user name on successful login:

        NDC.push(credentials.getUsername());

    This works, but the NDC is managed per thread, so once another thread processes a request from the same logged-in user, the trace of this user name is lost. I was thinking that there should be a common pattern to accomplish this simple task of attaching each log message to the logged-in user, using the NDC or not, to know exactly which user triggered which action. Does anyone know the appropriate way to accomplish this?

    Read the article

  • Aspect Oriented Programming vs List<IAction> To execute methods based on conditions

    - by David Robbins
    I'm new to AOP, so bear with me. Consider the following scenario: a state machine is used in a workflow engine, and after the state of the application changes, a series of commands is executed. Depending on the state, different types of commands should be executed. As I see it, one implementation is to create a List<IAction> and have each individual action determine whether it should execute. Would an aspect-oriented approach work as well? That is, could you create an aspect that notifies a class when a property changes, and execute the appropriate processes from that class? Would this help centralize the state-specific rules?

    Read the article

  • Best Practices for a Web App Staging Server (on a budget)

    - by fig-gnuton
    I'd like to set up a staging server for a Rails app. I use git & github, Cap, and have a VPS with Apache/Passenger. I'm curious about best practices for a staging setup, in terms of both the configuration of the staging server and the processes for interacting with it. I know it should be as identical to the production server as possible, but restricting public access to it will limit that, so tips on securing it for my use only would also be great. Another specific question is whether I could just create a virtual host on the VPS, so that the staging server could reside alongside the production one. I have a feeling there may be reasons to avoid this, though.

    Read the article

  • MPI Large Data all to all transfer

    - by csslayer
    My MPI application has some processes that generate large data. Say we have N+1 processes (one for master control; the others are workers), and each worker process generates large data, which is currently simply written to normal files named file1, file2, ..., fileN. The size of each file may be quite different. Now I need to send each fileM to the rank M process to do the next job, so it's just like an all-to-all data transfer. My problem is how I should use the MPI API to send these files efficiently. I used to use a Windows shared folder to transfer them, but I think that's not a good idea. I have thought about MPI_File and MPI_Alltoall, but those functions don't seem well suited to my case. Simple MPI_Send and MPI_Recv seem hard to use because every process needs to transfer large data, and I don't want to use a distributed file system for now.
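
    A hedged sketch of one workable approach, using mpi4py rather than the C API: stream each file in bounded chunks over plain point-to-point messages, so no rank ever buffers a whole file; the same loop translates directly to MPI_Send/MPI_Recv in C. File names are hypothetical and, for illustration only, the files are assumed to sit on rank 0:

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        CHUNK = 8 * 1024 * 1024          # 8 MB per message keeps memory bounded

        def send_file(path, dest):
            with open(path, "rb") as f:
                while True:
                    piece = f.read(CHUNK)
                    comm.send(piece, dest=dest, tag=1)
                    if not piece:        # an empty bytes object marks end-of-file
                        break

        def recv_file(path, source):
            with open(path, "wb") as f:
                while True:
                    piece = comm.recv(source=source, tag=1)
                    if not piece:
                        break
                    f.write(piece)

        if rank == 0:
            for m in range(1, comm.Get_size()):
                send_file(f"file{m}", dest=m)
        else:
            recv_file(f"file{rank}", source=0)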

    Read the article

  • How to write to a Text File in Pipe delimited format from MS Sql Server / ASP.Net?

    - by NJTechGuy
    I have a text file which needs to be constantly updated (at regular intervals). All I want is the syntax, and possibly some code, that outputs data from an MS SQL database using ASP.NET. The code I have so far is:

        <%@ Import Namespace="System.IO" %>
        <script language="vb" runat="server">
        sub Page_Load(sender as Object, e as EventArgs)
            Dim FILENAME as String = Server.MapPath("Output.txt")
            Dim objStreamWriter as StreamWriter
            ' If Len(Dir$(FILENAME)) > 0 Then Kill(FILENAME)
            objStreamWriter = File.AppendText(FILENAME)
            objStreamWriter.WriteLine("A user viewed this demo at: " & DateTime.Now.ToString())
            objStreamWriter.Close()
            Dim objStreamReader as StreamReader
            objStreamReader = File.OpenText(FILENAME)
            Dim contents as String = objStreamReader.ReadToEnd()
            lblNicerOutput.Text = contents.Replace(vbCrLf, "<br>")
            objStreamReader.Close()
        end sub
        </script>
        <asp:label runat="server" id="lblNicerOutput" Font-Name="Verdana" />

    With PHP it is a breeze, but with .NET I have no clue. If you could help me with the database connectivity and how to write the data in pipe-delimited format to an Output.txt file, that would be awesome. Thanks, guys!
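
    For the flat-file side of the question, here is a minimal sketch in Python (not ASP.NET) of pulling rows from SQL Server and writing them pipe-delimited, assuming the pyodbc package and an installed SQL Server ODBC driver; the connection string, table and columns are placeholders:

        import csv
        import pyodbc  # assumes an ODBC driver for SQL Server is installed

        # Hypothetical connection string and query; substitute your own.
        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
            "DATABASE=mydb;Trusted_Connection=yes;"
        )

        with open("Output.txt", "w", newline="") as f:
            writer = csv.writer(f, delimiter="|")   # pipe-delimited rows
            cursor = conn.cursor()
            for row in cursor.execute("SELECT id, name, email FROM users"):
                writer.writerow(row)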

    Read the article

  • How to tell the parent that the thread is done in C++ using pthreads ?

    - by milleroff
    Hi. I have a TCP server application that serves each client in a new thread, using POSIX threads and C++. The server calls listen on its socket, and when a client connects, it makes a new object of class Client. The new object runs in its own thread and processes the client's requests. When a client disconnects, I want some way to tell my main() thread that this thread is done, so that main() can delete the object and log something like "Client disconnected". My question is: how do I tell the main thread that a worker thread is done?
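
    The conventional pthreads answer is a mutex/condition-variable protected list of finished client ids that main() waits on, then pthread_join()s the finished thread. The same shape sketched in Python, where queue.Queue plays the condition variable's role (names are illustrative):

        import queue
        import threading

        done = queue.Queue()        # thread-safe channel from workers to main

        def handle_client(client_id):
            # ... serve the client until it disconnects ...
            done.put(client_id)     # last act before the thread finishes

        threads = {}
        for cid in range(3):
            t = threading.Thread(target=handle_client, args=(cid,))
            threads[cid] = t
            t.start()

        for _ in range(3):
            finished = done.get()   # main blocks until some worker reports in
            threads[finished].join()
            print(f"Client {finished} disconnected")   # delete the object, log, etc.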

    Read the article

  • TinyMCE refresh issue

    - by Luke
    I'm using TinyMCE, which is working fine for the most part. When the user saves, the page redirects to a PHP page that extracts the textfield data and processes it (I won't elaborate, because that's not the issue). The issue is that when it redirects back to the page, the textfield is blank: I can click on it but not see anything, and if I type, text is entered (but I can't see it). This also happens when I go to the address directly. The only way I can see it is to hit the refresh button. I tried a meta refresh to the page, and I tried disabling cache. Any ideas would be great. Thanks in advance.

    Read the article

  • what is the relation between SIGTSTP and SIGCHLD

    - by Rawhi
    I have two handlers, one for each of SIGTSTP and SIGCHLD. The thing is that when I pause a process using SIGTSTP, the handler function for SIGCHLD runs too. What should I do to prevent this?

        void ExeExternal(char *args[MAX_ARG], char* cmdString, LIST_ELEMENT** pList, int *Susp_Bg_Pid, int *susp)
        {
            int pID, status, w;
            switch (pID = fork()) {
            case -1:
                perror("smash error: >");
                break;
            case 0: // Child Process
                setpgrp();
                execv(args[0], args);
                execvp(args[0], args);
                perror("error");
                exit(EXIT_FAILURE);
                break;
            default:
                if (cmdString[strlen(cmdString) - 1] != '&') {
                    *Susp_Bg_Pid = pID;
                    *susp = 1;
                    while (*susp);
                } else {
                    InsertElem(pList, args[0], getpid(), pID, 0);
                }
                break;
            }
        }

    Signal handlers:

        void signalHandler(int signal)
        {
            int pid, cstatus;
            if (signal == SIGCHLD) {
                susp = 0;
                pid = waitpid(-1, &cstatus, WNOHANG);
                printf("[[child %d terminated]]\n", pid);
                DelPID(&JobsList, pid);
            }
        }

        void ctrlZsignal(int signal)
        {
            kill(Susp_Bg_Pid, SIGTSTP);
            susp = 0;
            printf("\nchild %d suspended\n", Susp_Bg_Pid);
        }

    Susp_Bg_Pid is used to save the paused process id. susp indicates whether the parent process (the "smash" shell) is suspended.
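
    The relation is direct: the kernel delivers SIGCHLD to the parent not only when a child terminates, but also when a child is stopped (e.g. by SIGTSTP) or continued. Two standard fixes: install the SIGCHLD handler with sigaction() and the SA_NOCLDSTOP flag so stops no longer raise SIGCHLD at all, or keep the handler and distinguish the cases with waitpid(-1, &status, WNOHANG | WUNTRACED) plus WIFSTOPPED(status). A small Python demonstration of the second approach:

        import os
        import signal
        import subprocess
        import time

        def on_sigchld(signum, frame):
            pid, status = os.waitpid(-1, os.WNOHANG | os.WUNTRACED)
            if pid == 0:
                return                                     # no child changed state
            if os.WIFSTOPPED(status):
                print(f"child {pid} stopped, not terminated")  # keep the job entry
            else:
                print(f"child {pid} terminated")               # safe to clean up

        signal.signal(signal.SIGCHLD, on_sigchld)

        child = subprocess.Popen(["sleep", "60"])
        time.sleep(0.2)
        child.send_signal(signal.SIGTSTP)   # parent gets SIGCHLD with a stop status
        time.sleep(0.5)
        child.kill()                        # parent gets SIGCHLD with a kill status
        time.sleep(0.5)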

    Read the article
