Search Results

Search found 5809 results on 233 pages for 'curl multi'.

Page 176/233 | < Previous Page | 172 173 174 175 176 177 178 179 180 181 182 183  | Next Page >

  • Most efficient way of checking for a return from a function call in Perl

    - by Gaurav Dadhania
    I want to add the return value from a function call to an array, but only if something is actually returned (i.e. only when the subroutine hits an explicit return statement). So I'm using

        unshift @{$errors}, "HashValidator::$vfunction($hashref)";

    but this actually adds the string of the function call to the array. I also tried

        unshift @{$errors}, $temp if defined my $temp = "HashValidator::$vfunction($hashref)";

    with the same result. What would a Perl one-liner that does this efficiently look like? (I know I can do the ugly multi-line check, but I want to learn.) Thanks,
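
    A minimal sketch of one possible fix (assuming $vfunction holds a sub name in the HashValidator package): the quoted call above only interpolates variables into a string, so the sub is never invoked. Calling through a symbolic reference and testing the result keeps it short:

        no strict 'refs';                                    # allow the symbolic call below
        my @ret = &{"HashValidator::$vfunction"}($hashref);  # actually call the sub
        unshift @{$errors}, @ret if @ret;                    # no-op when nothing is returned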

    Read the article

  • How to pass around an event as a parameter in C#

    - by Jerry Liu
    I am writing a unit test for a multi-threaded application, where I need to wait until a specific event is triggered so that I know the async operation is done. E.g. when I call repository.add(something), I wait for the AfterChange event before doing any assertions. So I wrote a utility function to do that:

        public static void SyncAction(EventHandler event_, Action action_)
        {
            var signal = new object();
            EventHandler callback = null;
            callback = new EventHandler((s, e) =>
            {
                lock (signal)
                {
                    Monitor.Pulse(signal);
                }
                event_ -= callback;
            });
            event_ += callback;
            lock (signal)
            {
                action_();
                Assert.IsTrue(Monitor.Wait(signal, 10000));
            }
        }

    However, the compiler prevents passing the event out of the class. Is there a way to achieve that?
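
    A common workaround (a sketch, with the repository names carried over from the question rather than from any real API): a C# event can only be subscribed to or unsubscribed from outside its declaring class, so instead of passing the event itself, pass the subscribe and unsubscribe operations as delegates:

        public static void SyncAction(
            Action<EventHandler> subscribe,    // e.g. h => repo.AfterChange += h
            Action<EventHandler> unsubscribe,  // e.g. h => repo.AfterChange -= h
            Action action)
        {
            var signal = new object();
            EventHandler callback = null;
            callback = (s, e) =>
            {
                unsubscribe(callback);                   // detach once fired
                lock (signal) { Monitor.Pulse(signal); }
            };
            subscribe(callback);
            lock (signal)
            {
                action();
                Assert.IsTrue(Monitor.Wait(signal, 10000));  // fail if no event within 10s
            }
        }

        // usage:
        // SyncAction(h => repository.AfterChange += h,
        //            h => repository.AfterChange -= h,
        //            () => repository.add(something));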

    Read the article

  • Send file FTP over SSL with custom port number

    - by JM4
    I have asked this question before, but in a different manner. I am taking form data, compiling it into a temporary CSV file, and trying to send it to a client via FTP over SSL (this is the only route I am interested in hearing solutions for; unless there is a workaround, I cannot make changes). I have tried the following: ftp_connect - nothing happens, the page just times out; ftp_ssl_connect - nothing happens, the page just times out; the curl library - same thing, and given a URL it also gives an error. I am given the following information: FTPS server IP address; TCP port (1234); username; password; the data directory to dump the file in; FTP mode: passive. Very, very basic code (which I believe should at minimum initiate a connection):

        <?php
        $ftp_server = "00.000.00.000"; // masked for security
        $ftp_port = "1234";            // masked, but not 990
        $ftp_user_name = "username";
        $ftp_user_pass = "password";

        // set up basic ssl connection
        $conn_id = ftp_ssl_connect($ftp_server, $ftp_port, "20");

        // login with username and password
        $login_result = ftp_login($conn_id, $ftp_user_name, $ftp_user_pass);

        echo ftp_pwd($conn_id); // /
        echo "hello";

        // close the ssl connection
        ftp_close($conn_id);
        ?>

    When I run this over a SmartFTP client, everything works just fine. I just can't get it to work using PHP (which is a necessity). Has anybody had success doing this in the past? I would be very interested to hear your approach.
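
    For comparison, a hedged sketch of the cURL route (host, port, and remote path are placeholders): explicit FTP over TLS on a custom control port, uploading the temporary CSV. Passive mode is cURL's default for FTP:

        <?php
        $fp = fopen($tmpCsv, 'r');                              // the temporary CSV
        $ch = curl_init('ftp://00.000.00.000/data/outgoing.csv');
        curl_setopt($ch, CURLOPT_PORT, 1234);                   // custom control port
        curl_setopt($ch, CURLOPT_USERPWD, 'username:password');
        curl_setopt($ch, CURLOPT_FTP_SSL, CURLFTPSSL_ALL);      // require TLS on the session
        curl_setopt($ch, CURLOPT_FTPSSLAUTH, CURLFTPAUTH_TLS);  // negotiate via AUTH TLS
        curl_setopt($ch, CURLOPT_UPLOAD, true);
        curl_setopt($ch, CURLOPT_INFILE, $fp);
        curl_setopt($ch, CURLOPT_INFILESIZE, filesize($tmpCsv));
        if (!curl_exec($ch)) {
            echo 'upload failed: ' . curl_error($ch);
        }
        curl_close($ch);
        fclose($fp);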

    Read the article

  • Can in-memory SQLite databases be used concurrently?

    - by Kent Boogaart
    In order to prevent a SQLite in-memory database from being cleaned up, one must use the same connection to access the database. However, using the same connection causes SQLite to synchronize access to the database. Thus, if I have many threads performing reads against an in-memory database, it is slower on a multi-core machine than the exact same code running against a file-backed database. Is there any way to get the best of both worlds? That is, an in-memory database that permits multiple, concurrent calls to the database?
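
    One avenue worth checking (a sketch, assuming SQLite 3.7.13 or later, which added shared-cache in-memory databases): a URI filename lets several connections address the same in-memory database, so each thread can hold its own connection. Illustrated here with Python's built-in sqlite3 module:

        import sqlite3

        # Two independent connections that see the same in-memory database.
        c1 = sqlite3.connect("file:memdb1?mode=memory&cache=shared", uri=True)
        c2 = sqlite3.connect("file:memdb1?mode=memory&cache=shared", uri=True)

        c1.execute("CREATE TABLE t (x INTEGER)")
        c1.execute("INSERT INTO t VALUES (42)")
        c1.commit()
        print(c2.execute("SELECT x FROM t").fetchone())  # (42,)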

    Read the article

  • Why hasn't functional programming taken over yet?

    - by pankrax
    I've read some texts about declarative/functional programming (languages), tried out Haskell, and written one myself. From what I've seen, functional programming has several advantages over the classical imperative style:

    - Stateless programs; no side effects
    - Concurrency; plays extremely nicely with the rising multi-core technology
    - Programs are usually shorter and in some cases easier to read
    - Productivity goes up (example: Erlang)
    - Imperative programming is a very old paradigm (as far as I know) and possibly not suitable for the 21st century

    Why are companies using, or programs written in, functional languages still so "rare"? Why, when looking at the advantages of functional programming, are we still using imperative programming languages? Maybe it was too early for it in 1990, but today?

    Read the article

  • .NET Multipage Tiff with Lossy Compression

    - by Adam Berent
    I need a way to take several JPGs and convert them into a single multi-page TIFF. I have that working using GDI+, however it only works with LZW compression, which is lossless. This means that my three 50 KB JPGs turn into a 3 MB multi-page TIFF file. This is not something I can accept for the software I am working on. I know that the TIFF image format can use a JPEG compression scheme, but GDI+ does not seem to support this. Does anyone know how to do this in .NET (C#), or of any component that does this conversion?
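
    One commonly suggested route, since GDI+ exposes only a few TIFF compressions, is a managed libtiff wrapper such as LibTiff.Net, which does support JPEG-in-TIFF. A rough sketch from memory (the field order and the GetRgbScanline() helper are assumptions to verify against the library's samples):

        using BitMiracle.LibTiff.Classic;

        using (Tiff tif = Tiff.Open("out.tif", "w"))
        {
            foreach (var page in pages)            // pages: the decoded JPEGs
            {
                tif.SetField(TiffTag.IMAGEWIDTH, page.Width);
                tif.SetField(TiffTag.IMAGELENGTH, page.Height);
                tif.SetField(TiffTag.SAMPLESPERPIXEL, 3);
                tif.SetField(TiffTag.BITSPERSAMPLE, 8);
                tif.SetField(TiffTag.PHOTOMETRIC, Photometric.RGB);
                tif.SetField(TiffTag.PLANARCONFIG, PlanarConfig.CONTIG);
                tif.SetField(TiffTag.COMPRESSION, Compression.JPEG);   // lossy, small
                tif.SetField(TiffTag.JPEGQUALITY, 75);
                tif.SetField(TiffTag.ROWSPERSTRIP, page.Height);
                for (int row = 0; row < page.Height; row++)
                    tif.WriteScanline(GetRgbScanline(page, row), row); // hypothetical helper
                tif.WriteDirectory();               // close this page, start the next
            }
        }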

    Read the article

  • Can I Use ASP.NET Wizard Control to Insert Data into Multiple Tables?

    - by SidC
    Hello All, I have an ASP.NET 3.5 webforms project written in VB that involves a multi-table SQL Server insert. That is, I want the customer to input all their contact information, order details etc. into one control (thinking of the wizard control). Then, I want to call a stored procedure that does the insert into the respective database tables. I'm familiar and comfortable with the ASP.NET wizard control. However, all the examples I've seen in my searches pertain to inserting data into one table. Questions:

    1. Given a typical order process - customer information, order information, order details - should a wizard control be used to insert data into multiple database tables? If not, what controls/workflow do you suggest?
    2. I've set primary keys and indexes on my order details, orders and customers tables. Is there special stored procedure syntax to use to ensure that referential integrity is maintained through the insert process?

    Thanks, Sid
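
    On the second question: there is no special insert syntax as such, but wrapping the three inserts in one transaction inside the procedure keeps the parent/child keys consistent. A sketch (table and column names invented; assumes SQL Server 2008 or later):

        CREATE PROCEDURE dbo.InsertFullOrder
            @Name NVARCHAR(100), @Email NVARCHAR(100),
            @OrderDate DATETIME, @ProductId INT, @Qty INT
        AS
        BEGIN
            SET NOCOUNT ON;
            BEGIN TRY
                BEGIN TRANSACTION;

                INSERT INTO dbo.Customers (Name, Email) VALUES (@Name, @Email);
                DECLARE @CustomerId INT = SCOPE_IDENTITY();  -- new parent key

                INSERT INTO dbo.Orders (CustomerId, OrderDate)
                VALUES (@CustomerId, @OrderDate);
                DECLARE @OrderId INT = SCOPE_IDENTITY();

                INSERT INTO dbo.OrderDetails (OrderId, ProductId, Qty)
                VALUES (@OrderId, @ProductId, @Qty);

                COMMIT TRANSACTION;  -- all-or-nothing
            END TRY
            BEGIN CATCH
                IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
                DECLARE @msg NVARCHAR(2048) = ERROR_MESSAGE();
                RAISERROR(@msg, 16, 1);  -- surface the failure to the caller
            END CATCH
        END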

    Read the article

  • CRT not initialized

    - by jfhs
    I'm trying to compile a project with MSVC 2010. Compilation is OK, but when I try to run the app, it gives me a "CRT not initialized" error. It is a console application, so I tried specifying mainCRTStartup as the entry point, but it didn't help. There are other projects in the same solution, and they don't have this problem. The difference I see is that the one that is not working uses Boost (v1.38.0, if that's important). The Runtime Library is Multi-threaded DLL. The linker command line is:

        /OUT:"D:\temp\ghost\Release\ghost.exe" /INCREMENTAL:NO /NOLOGO
        /LIBPATH:"..\zlib\lib" /LIBPATH:"..\mysql\lib\opt" /LIBPATH:"..\boost\lib"
        "ws2_32.lib" "winmm.lib" "zdll.lib" "StormLibRAS.lib" "kernel32.lib"
        "user32.lib" "gdi32.lib" "winspool.lib" "comdlg32.lib" "advapi32.lib"
        "shell32.lib" "ole32.lib" "oleaut32.lib" "uuid.lib" "odbc32.lib"
        "odbccp32.lib" "D:\temp\ghost\bncsutil\vc8_build\Release\BNCSutil.lib"
        /MANIFEST /ManifestFile:"Release\ghost.exe.intermediate.manifest"
        /ALLOWISOLATION /MANIFESTUAC:"level='asInvoker' uiAccess='false'"
        /DEBUG /PDB:"D:\temp\ghost\Release\ghost.pdb" /SUBSYSTEM:CONSOLE
        /OPT:REF /OPT:ICF /PGD:"D:\temp\ghost\Release\ghost.pgd" /LTCG /TLBID:1
        /ENTRY:"mainCRTStartup" /DYNAMICBASE /NXCOMPAT /MACHINE:X86
        /ERRORREPORT:QUEUE

    Read the article

  • Calling C from Lua crashes while reallocating

    - by mkind
    Hi folks, I get a crazy error within this for-loop:

        matr = realloc(matr, newmax * sizeof(int *));
        for (i = 0; i < newmax; i++) {
            matr[i] = realloc(matr[i], newmax * sizeof(int));
        }

    matr is a multi-dimensional array (int **matr), and I need to resize both columns and rows: the first line resizes the column pointers and the for-loop resizes every row. It worked fine in C. Now I'm working on a library for Lua and it crashes here. Compiling works fine as well, but calling from Lua crashes with:

        lua: malloc.c:3552: mremap_chunk: Assertion `((size + offset) & (mp_.pagesize-1)) == 0' failed.

    I have no idea why, since it works fine when used from C.
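
    A sketch of the likely fix: after the outer realloc grows the array, the pointer slots between the old size and newmax hold garbage, and realloc() on a garbage pointer is undefined behavior; it only ever "worked" in plain C by luck, and Lua's allocator lays memory out differently. New slots must be NULL first (oldmax is assumed to be tracked by the caller):

        int **tmp = realloc(matr, newmax * sizeof(*matr));
        if (tmp == NULL) { /* handle allocation failure */ }
        matr = tmp;
        for (i = oldmax; i < newmax; i++)
            matr[i] = NULL;                  /* realloc(NULL, n) acts like malloc(n) */
        for (i = 0; i < newmax; i++) {
            int *row = realloc(matr[i], newmax * sizeof(int));
            if (row == NULL) { /* handle allocation failure */ }
            matr[i] = row;
        }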

    Read the article

  • Are C++ Reads and Writes of an int atomic

    - by theschmitzer
    I have two threads, one updating an int and one reading it. This value is a statistic where the order of the read and write is irrelevant. My question is: do I need to synchronize access to this multi-byte value anyway? Or, put another way, can part of the write be complete and get interrupted, and then the read happen? For example, think of incrementing value = 0x0000FFFF to 0x00010000. Is there a time where the value looks like 0x0001FFFF that I should be worried about? Certainly the larger the type, the more possible something like this is. I've always synchronized these types of accesses, but was curious what the community thought.
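
    For what it's worth, a sketch of the answer the language itself later standardized (C++11): before C++11 the standard guaranteed nothing about plain int (aligned-int atomicity on x86 is a platform detail, not a portable one), while std::atomic<int> makes torn reads impossible:

        #include <atomic>
        #include <cstdio>
        #include <thread>

        std::atomic<int> stat{0};   // the shared statistic

        int main() {
            std::thread writer([] {
                for (int i = 0; i < 1000000; ++i)
                    stat.fetch_add(1, std::memory_order_relaxed);    // indivisible increment
            });
            std::thread reader([] {
                int snapshot = stat.load(std::memory_order_relaxed); // never a torn value
                std::printf("saw %d\n", snapshot);
            });
            writer.join();
            reader.join();
        }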

    Read the article

  • "too many threads error" in blackberry OS-4.5

    - by SWATI
    Hi, in my application I have 20 icons (bitmap fields) on the home screen. When I click on any icon, an HTTP request is made in a separate thread. I have used the invokeLater method wherever necessary to take care of multi-threading problems. But still the number of threads goes beyond 16, and an error pops up indicating "too many threads" and that the application needs to be restarted! Can anybody tell me how to destroy these threads when they are no longer in use? I don't understand why they don't die on their own, as they usually do.

    Read the article

  • 1 bug to kill... Letting PHP Generate The Canonical.

    - by Sam
    Hi folks, to build a clean canonical URL that always returns one base URL, I'm stuck on the following case:

        <?php
        # every page
        $extensions = $_SERVER['REQUEST_URI']; # path like: /en/home.ast?ln=ja
        $qsIndex = strpos($extensions, '?');
        # removes the ?ln=de part
        $pageclean = $qsIndex !== FALSE ? substr($extensions, 0, $qsIndex) : $extensions;
        $canonical = "http://website.com" . $pageclean; # basic canonical url
        ?>
        <html><head><link rel="canonical" href="<?=$canonical?>"></head>

    When the URL is http://website.com/de/home.ext?ln=de, the canonical is http://website.com/de/home.ext. BUT I want to remove the file extension as well, whether it's .php, .ext, .inc or whatever two- or three-char extension .[xx] or .[xxx], so the base URL becomes http://website.com/en/home. Aaah, much nicer! But how do I achieve that in the current code? Any hints are much appreciated! (Other advice for proper canonical usage in this multi-lingual environment is welcome as well.)
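
    A sketch of one way to finish the job inside that snippet (assuming only a final two- or three-character dot-extension should be dropped): one preg_replace after the query string has been stripped.

        # strip a trailing .xx / .xxx extension, if present
        $pageclean = preg_replace('#\.[a-z0-9]{2,3}$#i', '', $pageclean);
        $canonical = "http://website.com" . $pageclean;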

    Read the article

  • Removing whitespace in Java string?

    - by waitinforatrain
    Hi guys, I'm writing a parser for some LISP files, and I'm trying to get rid of leading whitespace in a string. The string contents are along the lines of:

        :FUNCTION (LAMBDA (DELTA PLASMA-IN-0) (IF (OR (>= #61=(+ (* 1 DELTA) PLASMA-IN-0) 100) (<= #61# 0)) PLASMA-IN-0 #61#))

    The tabs are all printed as 4 spaces in the file, so I want to get rid of these leading tabs. I tried:

        string.replaceAll("\\s{4}", " ")

    but it had no effect at all on the string. Does anyone know what I'm doing wrong? Is it because it is a multi-line string? Thanks
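
    A sketch of the likely culprit: Java strings are immutable, so replaceAll() returns a new string instead of modifying the receiver, and the result has to be assigned back:

        String s = source.replaceAll("\\s{4}", " ");   // reassign, or the change is lost

        // To strip only *leading* runs of four spaces on each line of a
        // multi-line string, anchor the pattern in MULTILINE mode:
        String t = source.replaceAll("(?m)^( {4})+", "");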

    Read the article

  • Visual Studio 2010 compilation general error c1010070

    - by user1747455
    I wrote a little "hello world" program to test my computer, but when I compile the program there's an error:

        ------ Build started: Project: hi, Configuration: Debug Win32 ------
        Build started 15/10/2012 22:36:48.
        InitializeBuildStatus:
          Touching "Debug\hi.unsuccessfulbuild".
        Link:
          hi.vcxproj -> D:\MSVS\hi\Debug\hi.exe
        Manifest:
          Debug\hi.exe.intermediate.manifest : general error c1010070: Failed to load and parse the manifest. {q~ 0H
        Build FAILED.
        Time Elapsed 00:00:02.34
        ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

    The creepiest thing, I think, is that garbled "{q~ 0H" at the end of the manifest line... I wonder if there's any way to solve this. Please help, many thanks. Edit: I tried changing the character set to "multi-byte" and turning "embed manifest" (under "manifest tools") off, but that still doesn't solve the error.

    Read the article

  • Adding a child node to a JSON node dynamically

    - by Sai
    I have to create a nested multi-level JSON structure depending on the resultset that I get from MySQL. I created a JSON object initially. Now I want to add child nodes to the already existing child nodes in the object.

        d = collections.OrderedDict()
        jsonobj = {"test": dict(updated_at="today", ID="ID", ads=[])}
        for rows1 in rs:
            jsonobj['list']["ads"].append(dict(unit="1", type="ad_type", id="123",
                                               updated_at="today", x_id="111",
                                               x_name="test"))
        cur.execute("SELECT * from f_test")
        rs1 = cur.fetchall()
        for rows2 in rs1:
            propertiesObj = []
            d["name"] = "propName"
            d["type"] = "TypeName"
            d["value"] = "Value1"
            propertiesObj.append(d)
            jsonobj['play_list']["ads"].append()

    In the last line I want to add another child node to [play_list].[ads], which is an array list again. The output should look like [list].[ads].[preferences].
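
    A sketch of the general pattern (names invented): each ad is a plain dict, so a nested "preferences" list can hang off the ad and be appended to independently. Note that a fresh dict has to be built per row; re-appending one shared OrderedDict, as above, inserts the same object over and over.

        import json

        playlist = {"ads": []}
        ad = {"id": "123", "preferences": []}        # child list lives on the ad
        playlist["ads"].append(ad)

        rows = [("propName", "TypeName", "Value1")]  # stand-in for a MySQL resultset
        for name, type_, value in rows:
            ad["preferences"].append({"name": name, "type": type_, "value": value})

        print(json.dumps(playlist, indent=2))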

    Read the article

  • How do I post a link to the feed of a page via the Facebook Graph API *as* the page?

    - by jsdalton
    I'm working on a plugin for a WordPress blog that posts a link to every published article to a Facebook Page associated with the blog. I'm using the Graph API, and I have authenticated myself, for the time being, via OAuth. I can successfully post a message to the page using curl via a POST request to https://graph.facebook.com/mypageid/feed with e.g. message = "This is a test", and it publishes the message. The problem is that the message is "from" my user account. I'm an admin on this test page, and when I go to Facebook and post an update from the web, the link comes "from" my page. That's how I'd like this to be set up, because it looks silly if all the shared links come from a user account. Is there a way to authenticate myself as a page? Or is there an alternate way to POST to a page feed that doesn't end up being interpreted as a comment from a user? Thanks for any thoughts or suggestions.
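
    The usual mechanism (a sketch with placeholder IDs and tokens): pages have their own access tokens, exposed to an admin's user token via /me/accounts (this requires the manage_pages permission); posting with the page token makes the post come "from" the page.

        # 1. list your pages; each entry carries its own "access_token"
        curl "https://graph.facebook.com/me/accounts?access_token=USER_TOKEN"

        # 2. post with the page's token instead of the user's
        curl -F "message=This is a test" \
             -F "access_token=PAGE_TOKEN" \
             "https://graph.facebook.com/mypageid/feed"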

    Read the article

  • Add widgets to custom WordPress sidebars on theme activation?

    - by McGirl
    I'm working on a WordPress 3.0 multi-site installation. Each new blog will use the same theme with slight modifications (a custom Thesis install, if it matters). I'm trying to automate the set-up process for each new blog as much as possible. To that end, I'd like to automatically add widgets to my custom sidebars and widget-enabled footers. It'd be even better if the widgets could have pre-set parameters/content that I or the blog owner could then edit in the Widgets panel. I've searched high and low and haven't been able to figure out a way to make this work. Any and all suggestions are welcome. Thanks so much!
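
    A hedged sketch of one way to seed widgets programmatically (sidebar ID and content are placeholders; the option layout below matches how core stores Text widgets, but it's worth verifying against your WordPress version): widget instances live in per-type options, and placement lives in the sidebars_widgets option.

        // Runs once when the theme is activated (WP 3.0+).
        function myprefix_seed_widgets() {
            // Create one Text widget instance; it becomes "text-2" below.
            update_option('widget_text', array(
                2 => array('title' => 'Welcome', 'text' => 'Edit me!', 'filter' => false),
                '_multiwidget' => 1,
            ));
            // Place it in the theme's custom sidebar.
            $sidebars = get_option('sidebars_widgets', array());
            $sidebars['my_custom_sidebar'] = array('text-2');
            update_option('sidebars_widgets', $sidebars);
        }
        add_action('after_switch_theme', 'myprefix_seed_widgets');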

    Read the article

  • OpenMP vs OpenCL for computer vision

    - by user1235711
    I am creating a computer vision application that detects objects via a web camera, and I am currently focusing on its performance. My problem is in the part of the application that generates the XML cascade file using Haar training; this is very slow and takes about 6 days. To get around this problem I decided to use parallel processing, to minimize the total time to generate the Haar training XML file. I found two solutions: OpenCL, and (OpenMP and OpenMPI). Now I'm confused about which one to use. I read that OpenCL lets you use multiple CPUs and the GPU, but only on the same machine. Is that so? On the other hand, OpenMP is for multi-processing on one machine, and with OpenMPI we can use multiple CPUs over the network. But OpenMP has no GPU support. Can you please suggest the pros and cons of using either of the libraries?
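
    For a concrete feel of the single-machine option, a minimal OpenMP sketch in C (compile with e.g. gcc -fopenmp): one pragma fans the loop out across the local CPU cores; no GPU is involved.

        #include <omp.h>
        #include <stdio.h>

        int main(void) {
            long sum = 0;
            #pragma omp parallel for reduction(+:sum)  /* split iterations across cores */
            for (int i = 0; i < 100000000; i++)
                sum += i % 7;                          /* stand-in for per-sample work */
            printf("sum=%ld, threads available: %d\n", sum, omp_get_max_threads());
            return 0;
        }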

    Read the article

  • Is multithreading the right way to go for my case?

    - by Julien Lebosquain
    Hello, I'm currently designing a multi-client/server application. I'm using plain good old sockets because WCF or similar technology is not what I need. Let me explain: it isn't the classical case of a client simply calling a service; all clients can 'interact' with each other by sending a packet to the server, which will then do some action, and possibly re-dispatch an answer message to one or more clients. Although doable with WCF, the application would get pretty complex with hundreds of different messages. For each connected client, I'm of course using asynchronous methods to send and receive bytes. I've got the messages fully working; everything's fine. Except that for each line of code I'm writing, my head just burns because of multithreading issues. Since there could be around 200 clients connected at the same time, I chose to go the fully multithreaded way: each received message on a socket is immediately processed on the thread pool thread it was received on, not on a single consumer thread. Since each client can interact with other clients, and indirectly with shared objects on the server, I must protect almost every object that is mutable. I first went with a ReaderWriterLockSlim for each resource that must be protected, but quickly noticed that there are more writes overall than reads in the server application, and switched to the well-known Monitor to simplify the code. So far, so good. Each resource is protected, and I have helper classes that I must use to get a lock and its protected resource, so I can't use an object without getting a lock. Moreover, each client has its own lock that is entered as soon as a packet is received from its socket. This is done to prevent other clients from making changes to the state of this client while it has messages being processed, which is something that will happen frequently.

    Now, I don't just need to protect resources from concurrent accesses; I must keep every client in sync with the server for some collections I have. One tricky part that I'm currently struggling with is the following: I have a collection of clients, and each client has its own unique ID. When a client connects, it must receive the IDs of every connected client, and each one of them must be notified of the newcomer's ID. When a client disconnects, every other client must know it so that its ID is no longer valid for them. Every client must always have, at a given time, the same clients collection as the server, so that I can assume that everybody knows everybody. This way, if I'm sending a message to client #1 telling "Client #2 has done something", I know that it will always be correctly interpreted: client #1 will never wonder "but who is Client #2 anyway?".

    My first attempt at handling the connection of a new client (let's call it X) was this pseudo-code (remember that newClient is already locked here):

        lock (clients)
        {
            foreach (var client in clients)
            {
                lock (client)
                {
                    client.Send("newClient with id X has connected");
                }
            }
            clients.Add(newClient);
            newClient.Send("the list of other clients");
        }

    Now imagine that at the same time, another client has sent a packet that translates into a message that must be broadcast to every connected client. The pseudo-code is something like this (remember that the current client - let's call it Y - is already locked here):

        lock (clients)
        {
            foreach (var client in clients)
            {
                lock (client)
                {
                    client.Send("something");
                }
            }
        }

    An obvious deadlock occurs here: on one thread X is locked, the clients lock has been entered, we've started looping through the clients, and at some moment we must get Y's lock... which is already acquired on the second thread, itself waiting for the clients collection lock to be released!

    This is not the only case like this in the server application. There are other collections which must be kept in sync with the clients, some properties on a client can be changed by another one, etc. I tried other types of locks, lock-free mechanisms and a bunch of other things. Either there were obvious deadlocks when I was using too many locks for safety, or obvious race conditions otherwise. When I finally find a good middle ground between the two, it usually comes with very subtle race conditions / deadlocks and other multi-threading issues... my head hurts very quickly, since for any single line of code I'm writing I have to review almost the whole application to ensure everything will behave correctly with any number of threads.

    So here's my final question: how would you resolve this specific case, and the general case? And more importantly: am I going the wrong way here? I have few problems with the .NET framework, C#, simple concurrency or algorithms in general. Still, I'm lost here. I know I could use only one thread processing the incoming requests and everything would be fine. However, that won't scale well at all with more clients... but I'm thinking more and more of going this simple way. What do you think? Thanks in advance to you, StackOverflow people who have taken the time to read this huge question. I really had to explain the whole context if I want to get some help.
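
    One common way out of that specific deadlock (a sketch in C#, with invented types): enforce a strict lock order by never taking a per-client lock while the collection lock is held; snapshot the recipients under the collection lock and send outside it. The send loop must then tolerate clients that disconnected after the snapshot was taken.

        List<Client> snapshot;
        lock (clients)
        {
            clients.Add(newClient);                 // membership change stays guarded
            snapshot = new List<Client>(clients);   // copy while protected
        }
        // The collection lock is released here, so a concurrent broadcast
        // can make progress even while we hold individual client locks.
        foreach (var client in snapshot)
        {
            if (client == newClient) continue;
            lock (client)
            {
                client.Send("newClient with id X has connected");
            }
        }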

    Read the article

  • SQL Compact Edition 3.5 SP 1 - LockTimeOutException - how to debug?

    - by Bob King
    Intermittently in our app, we encounter LockTimeoutExceptions being thrown from SQL CE. We've recently upgraded to 3.5 SP 1, and a number of them seem to have gone away, but we still see them occasionally. I'm certain it's a bug in our code (which is multi-threaded), but I haven't been able to pin it down precisely. Does anyone have any good techniques for debugging this problem? The exceptions log like this (there's never a stack trace for these exceptions):

        SQL Server Compact timed out waiting for a lock. The default lock time is
        2000ms for devices and 5000ms for desktops. The default lock timeout can be
        increased in the connection string using the ssce: default lock timeout
        property. [ Session id = 6,Thread id = 7856,Process id = 10116,Table name =
        Product,Conflict type = s lock (x blocks),Resource = DDL ]

    Our database is read-heavy but does seldom writes, and I think I've got everything protected where it needs to be. EDIT: SQL CE already automatically uses NOLOCK: http://msdn.microsoft.com/en-us/library/ms172398(sql.90).aspx
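
    Not a fix for the underlying race, but while debugging, the message's own suggestion is a one-line change: raise the lock timeout in the connection string (value in milliseconds; the path is a placeholder, and the exact keyword spelling is worth checking against the SqlCeConnection docs, since the OLE DB provider uses an ssce: prefix).

        using System.Data.SqlServerCe;

        var conn = new SqlCeConnection(
            @"Data Source=C:\data\app.sdf;Default Lock Timeout=20000;");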

    Read the article

  • Error monitoring/handling on webservers

    - by Industrial
    Hi everybody, we have a web server that we're about to launch a number of applications onto. They will all share database and memcached servers, but each application has its own MySQL database, and all memcached keys are prefixed per application.

    Possible scenario: if a memcached server in our cluster goes boom, we want someone (say, the system admin on duty) to be automatically contacted by email, iPhone push notification or any other appropriate channel. If we were to install 150 identical applications for our customers on our servers and a memcached server died, all 150 applications would individually find this out and contact our system admin, who is most certainly going to think about getting a new job where he or she isn't woken up by 150 messages sent at 4:15 in the morning.

    Possible solution: one idea is to set up an external server for error handling that gets a $_POST or cURL request sent to it, and handles storage of the error message depending on the seriousness of the actual error. Upon receiving the error call, it would of course check whether the same memcached server has already been reported as offline, in which case there would be no need to spam the system admin with additional reminders...

    The questions: What's a good approach to handling errors like this? How do the big guys in the industry handle this? Thanks!
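
    A sketch of the dedupe idea on the receiving end (field names and the notify_admin() mailer are invented, and the suppression window lives in a small datastore on the monitoring box itself): alert only when this particular failure hasn't been reported recently.

        <?php
        $key = sha1($_POST['service'] . '|' . $_POST['error']);
        if ($memcached->get("err:$key") === false) {          // not reported lately
            $memcached->set("err:$key", 1, 600);              // suppress repeats for 10 min
            notify_admin($_POST['service'], $_POST['error']); // hypothetical mailer
        }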

    Read the article

  • Maven profile for single module

    - by c0mrade
    I have a multi-module Maven project which builds successfully, and I'd like to build just one of its modules. How would I do that with profiles? I could do it from the console in two ways: go to the child module and mvn package, or use the reactor to build just that one module. Can I do the same thing with profiles, by modifying the POM? Thank you. EDIT: If it is impossible from the POM, can I do it from settings.xml?
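
    A sketch of the profile route (module and profile ids invented): because modules declared in profiles are merged into the build rather than replacing the base list, the trick is to keep the module list itself inside profiles, with the full list active by default. Activating another profile then switches off the default one. (Recent Maven versions also support mvn -pl <module> package from the parent directory, which needs no POM changes at all.)

        <!-- parent pom.xml -->
        <profiles>
          <profile>
            <id>all-modules</id>
            <activation>
              <activeByDefault>true</activeByDefault>
            </activation>
            <modules>
              <module>core</module>
              <module>web</module>
            </modules>
          </profile>
          <profile>
            <id>core-only</id>   <!-- activate with: mvn package -P core-only -->
            <modules>
              <module>core</module>
            </modules>
          </profile>
        </profiles>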

    Read the article

  • Looking for a wiki-style, standalone, version-control-"safe" documentation package

    - by basszero
    This may sound like it's not a programming related question, but stick with me here... My team and I have found that documenting our project (a development platform w/ API) with a wiki is both useful to us and useful to the users. Due to some organizational issues, we're forced to do multi-site development without network connectivity. We've switched to a DVCS (Mercurial) and had great success with this. The wiki documentation proves to be a problem, as the central site is set up with MediaWiki. The offsite people have no way to access or edit the wiki. Is there any sort of wiki-style package which doesn't require a server/database and will be usable in a DVCS environment? Update: should be open-source and cross-platform

    Read the article
