Search Results

Search found 22000 results on 880 pages for 'worker process'.

  • Do I lose the benefits of macro recording if I develop Excel apps in Visual Studio?

    - by DanM
    I've written lots of Excel macros in the past using the following development process:
    1. Record a macro.
    2. Open the VBA editor.
    3. Edit the macro.
    I'm now experimenting with a Visual Studio 2008 "Excel 2007 Add-In" project (C#), and I'm wondering if I will have to give up this development process. Questions:
    1. I know I can still record macros using Excel, but is there any way to access the resulting code in Visual Studio? Or do I just have to copy and paste and then C#-ize it?
    2. What happens with my "Personal Macro Workbook"? Can I use the macros I have stored in there within C#? Or is there some way to convert them to C#?
    3. If there is some support for opening and editing VBA macros in Visual Studio, can you provide a very brief summary of how it works or point me to a good reference?
    4. Do you have any other tips for transitioning from writing macros in VBA using Excel's built-in editor to writing them in C# with Visual Studio?

    Read the article

  • Detect Application Shutdown in C# NET?

    - by Michael Pfiffer
    I am writing a small console application (which will be run as a service) that basically starts a Java app when it is running, shuts itself down if the Java app closes, and shuts down the Java app if it closes. I think I have the first two working properly, but I don't know how to detect when the .NET application is shutting down so that I can shut down the Java app before that happens. A Google search just returns a bunch of stuff about detecting Windows shutting down. Can anyone tell me how I can handle that part and whether the rest looks fine?

        namespace MinecraftDaemon
        {
            class Program
            {
                public static void LaunchMinecraft(String file, String memoryValue)
                {
                    String memParams = "-Xmx" + memoryValue + "M" + " -Xms" + memoryValue + "M ";
                    String args = memParams + "-jar " + file + " nogui";

                    ProcessStartInfo processInfo = new ProcessStartInfo("java.exe", args);
                    processInfo.CreateNoWindow = true;
                    processInfo.UseShellExecute = false;

                    try
                    {
                        using (Process minecraftProcess = Process.Start(processInfo))
                        {
                            minecraftProcess.WaitForExit();
                        }
                    }
                    catch
                    {
                        // Log Error
                    }
                }

                static void Main(string[] args)
                {
                    Arguments CommandLine = new Arguments(args);

                    if (CommandLine["file"] != null && CommandLine["memory"] != null)
                    {
                        // Launch the Application
                        LaunchMinecraft(CommandLine["file"], CommandLine["memory"]);
                    }
                    else
                    {
                        LaunchMinecraft("minecraft_server.jar", "1024");
                    }
                }
            }
        }

    Read the article

  • Understanding ulimit -u

    - by tripleee
    I'd like to understand what's going on here.

        linvx$ ( ulimit -u 123; /bin/echo nst )
        nst
        linvx$ ( ulimit -u 122; /bin/echo nst )
        -bash: fork: Resource temporarily unavailable
        Terminated
        linvx$ ( ulimit -u 123; /bin/echo one; /bin/echo two; /bin/echo three )
        one
        two
        three
        linvx$ ( ulimit -u 123; /bin/echo one & /bin/echo two & /bin/echo three )
        -bash: fork: Resource temporarily unavailable
        Terminated
        one

    I speculate that the first 122 processes are consumed by Bash itself, and that the remaining ulimit governs how many concurrent processes I am allowed to have. The documentation is not very clear on this. Am I missing something? More importantly, for a real-world deployment, how can I know what sort of ulimit is realistic? It's a long-running daemon which spawns worker threads on demand, and reaps them when the load decreases. I've had it spin the server to its death a few times. The most important limit is probably memory, which I have now limited to 200M per process, but I'd like to figure out how I can enforce a limit on the number of children (the program does allow me to configure a maximum, but how do I know there are no bugs in that part of the code?)
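
    To enforce a child limit independently of the daemon's own bookkeeping, the process can lower RLIMIT_NPROC on itself before spawning workers; the kernel then refuses further forks even if the configuration code has a bug. A minimal Python sketch of the idea (Linux only; the limit value is illustrative, and the same setrlimit call exists in C if the daemon isn't written in Python):

        import resource
        import subprocess

        soft, hard = resource.getrlimit(resource.RLIMIT_NPROC)

        def cap_children(n):
            # Lower only the soft limit; the hard limit stays, so it can be
            # raised again later without root. Note the count covers every
            # process owned by this user, not just children of this daemon,
            # which is why "ulimit -u 122" in the session above makes even
            # bash's own fork() fail.
            resource.setrlimit(resource.RLIMIT_NPROC, (n, hard))

        cap_children(150)  # illustrative value; size it to the real worker pool
        subprocess.Popen(["/bin/echo", "still below the limit"]).wait()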

    Read the article

  • MPI Odd/Even Compare-Split Deadlock

    - by erebel55
    I'm trying to write an MPI version of a program that runs an odd/even compare-split operation on n randomly generated elements. Process 0 should generate the elements and send nlocal of them to each of the other processes (keeping the first nlocal for itself). From there, process 0 should print out its results after running the CompareSplit algorithm, then receive the results from the other processes' runs of the algorithm, and finally print out the results it has just received. I have a large chunk of this already done, but I'm getting a deadlock that I can't seem to fix. I would greatly appreciate any hints. Here is my code: http://pastie.org/3742474 Right now I'm pretty sure the deadlock is coming from the Send/Recv at lines 134 and 151. I've tried changing the Send to use "tag" instead of myrank for the tag parameter, but when I did that I just kept getting "MPI_ERR_TAG: invalid tag" for some reason. Obviously I would also run the algorithm within process 0, but I took that part out for now until I figure out what is going wrong. Any help is appreciated.
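
    The classic cause of this kind of deadlock is two ranks each blocking in a Send aimed at the other; the standard fix is to combine each exchange into a single Sendrecv. A small illustration of that pattern in mpi4py (this is not the pastie code; the array size, tags, and phase/partner logic are just a sketch of odd/even compare-split):

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        nlocal = 4
        mine = np.sort(np.random.randint(0, 100, nlocal).astype('i'))
        other = np.empty(nlocal, dtype='i')

        for phase in range(size):
            partner = rank + 1 if (phase + rank) % 2 == 0 else rank - 1
            if 0 <= partner < size:
                # Sendrecv pairs the send with its matching receive, so neither
                # rank blocks waiting for the other -- the usual cure for two
                # face-to-face blocking Sends.
                comm.Sendrecv(mine, dest=partner, sendtag=phase,
                              recvbuf=other, source=partner, recvtag=phase)
                merged = np.sort(np.concatenate((mine, other)))
                mine = merged[:nlocal] if rank < partner else merged[nlocal:]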

    Read the article

  • Memory mapping of files and system cache behavior in WinXP

    - by Canopus
    Our application is memory intensive and reads a large number of disk files; the total load can be more than 3 GB. A custom memory manager uses memory-mapped files to read this huge amount of data. The files are mapped into the process memory space only when needed, which keeps the process memory well under control. But what we observe is that, with memory mapping, the system cache keeps growing until it occupies the available physical memory, which slows down the entire system. My question is: how do I prevent the system cache from hogging physical memory? I attempted to remove file buffering (by using FILE_FLAG_NO_BUFFERING), but then the read operations take a considerable amount of time and slow down application performance. How can I achieve scalability without sacrificing much performance? What are the common techniques used in such cases? I don't have a good understanding of the Windows XP caching behavior, so any good links explaining it would also be helpful.

    Read the article

  • Execute JavaScript from within a C# assembly

    - by ScottKoon
    I'd like to execute JavaScript code from within a C# assembly and have the results of the JavaScript code returned to the calling C# code. It's easier to define the things that I'm not trying to do: I'm not trying to call a JavaScript function on a web page from my code-behind. I'm not trying to load a WebBrowser control. I don't want to have the JavaScript perform an AJAX call to a server. What I want to do is write unit tests in JavaScript and have the unit tests output JSON; even plain text would be fine. Then I want to have a generic C# class/executable that can load the file containing the JS, run the JS unit tests, scrape/load the results, and return a pass/fail with details during a post-build task. I think it's possible using the old ActiveX ScriptControl, but it seems like there ought to be a .NET way to do this without using Silverlight, the DLR, or anything else that hasn't shipped yet. Anyone have any ideas?

    Update: from Brad Abrams' blog:

        namespace Microsoft.JScript.Vsa
        {
            [Obsolete("There is no replacement for this feature. Please see the ICodeCompiler documentation for additional help. http://go.microsoft.com/fwlink/?linkid=14202")]

    Clarification: we have unit tests for our JavaScript functions that are written in JavaScript using the JSUnit framework. Right now, during our build process, we have to manually load a web page and click a button to ensure that all of the JavaScript unit tests pass. I'd like to be able to execute the tests during the post-build process when our automated C# unit tests are run, report the success/failure alongside our C# unit tests, and use them as an indicator of whether or not the build is broken.

    Read the article

  • Any techniques to interrupt, kill, or otherwise unwind (releasing synchronization locks) a single de

    - by gojomo
    I have a long-running process where, due to a bug, a trivial/expendable thread is deadlocked with a thread which I would like to continue, so that it can perform some final reporting that would be hard to reproduce in another way. Of course, fixing the bug for future runs is the proper ultimate resolution. Of course, any such forced interrupt/kill/stop of any thread is inherently unsafe and likely to cause other unpredictable inconsistencies. (I'm familiar with all the standard warnings and the reasons for them.) But still, since the only alternative is to kill the JVM process and go through a more lengthy procedure which would result in a less-complete final report, messy/deprecated/dangerous/risky/one-time techniques are exactly what I'd like to try. The JVM is Sun's 1.6.0_16 64-bit on Ubuntu, and the expendable thread is waiting-to-lock an object monitor.
    - Can an OS signal directed at an exact thread create an InterruptedException in the expendable thread?
    - Could attaching with gdb, and directly tampering with JVM data or calling JVM procedures, allow a forced release of the object monitor held by the expendable thread?
    - Would a Thread.interrupt() from another thread generate an InterruptedException from the waiting-to-lock frame? (With some effort, I can inject an arbitrary beanshell script into the running system.)
    - Can the deprecated Thread.stop() be sent via JMX or any other remote-injection method?
    Any ideas appreciated; the more 'dangerous', the better! And if your suggestion has worked in personal experience in a similar situation, the best!

    Read the article

  • Windows 7: Dual Monitor Issue

    - by sdoca
    I have a Dell laptop with a docking station and two external monitors hooked up. I'm running Windows 7 64-bit. Originally, the two monitors were the same: Dell 1908FP. I have replaced one of them with an HP LA2405. I have set the HP 24" (1920 x 1200, connected via DVI port) as monitor 1 and extended to use the Dell 19" (1280 x 1024, connected via VGA port) as monitor 2. I have set my power plan so that when the laptop is plugged in, it will turn the monitors off after 10 minutes, but it will not put my laptop to sleep. However, now that I have the HP monitor, when I unlock my laptop and the monitors come back on, all my windows are resized and shifted to the top left corner of the HP monitor as if they had been resized for display on the smaller Dell monitor. A co-worker has the exact same laptop and monitor configuration but doesn't experience this issue, so I figure there's some configuration that's different, but we can't find it. I haven't been able to find any mention of a similar problem doing an internet search, but I'm not really sure what terms to use in my search. Anybody have any suggestions as to what may be causing the issue? OS setting? Monitor setting? EDIT: My laptop is using the Intel Graphics and Media Control Panel/drivers.

    Read the article

  • IIS Web Farm Framework servers are automatically set to "unavailable" even when they are healthy... And they never return to the available state!

    - by JohannesH
    I have 2 web farm configurations, one with 2 member servers and one with 3 member servers. I have health monitoring set up on both farms, and the monitoring tool reports all servers as being healthy. However, after a while all the servers are marked as being "Unavailable" and "Healthy" in the "Monitoring and Management" screen (in the "Servers" screen they are all listed with "Yes" in the "Ready for Load Balancing" column). Viewing the event log on both the web farm controller and the farm servers doesn't reveal anything interesting: there are no warnings or errors in the period where the servers became unavailable. There are a couple of informational events about the worker process getting shut down due to inactivity, but I hope this isn't the cause, since that would mean the farms will die during the night when the load is low. Am I missing something? EDIT: By the way, I think it's very odd that the application pool shuts down on the servers, since the health monitoring system is polling an aspx page on each server. Shouldn't that keep them going? EDIT2: Now I've also experienced this problem with the RTW version of Web Farm Framework 2.

    Read the article

  • calling plugin inside ajax call function problem

    - by zurna
    I pull the categories of one section from an XML file. The problem is that the pulled categories need to be picked up by the tabs plugin, so I tried to apply the tabs plugin to the CategoryName variable, but it is not working. I get the following error:

        Error: CategoryName.find is not a function
        Source File: http://www.refinethetaste.com/FLPM/
        Line: 23

    Test website: http://www.refinethetaste.com/FLPM/

        $.ajax({
            dataType: "xml",
            url: "/FLPM/content/home/index.cs.asp?Process=ViewVCategories",
            success: function(xml) {
                $(xml).find('row').each(function() {
                    var id = $(this).attr('id');
                    var CategoryName = $(this).find('CategoryName').text();
                    $("<div class='tab fleft'><a href='http://www.refinethetaste.com/FLPM/content/home/index.cs.asp?Process=ViewVideos&CATEGORYID=" + id + "'>" + CategoryName + "</a></div>").appendTo("#VCategories");
                    CategoryName.find("div.row-title .red").tabs("div.panes > div");
                });
            }
        });

    The pulled categories are displayed here:

        <div class="row-title clear" id="VCategories"></div>

    The categories XML:

        <rows>
            <row id="1"><CategoryName>Nation</CategoryName></row>
            <row id="2"><CategoryName>Politics</CategoryName></row>
            <row id="3"><CategoryName>Health</CategoryName></row>
            <row id="4"><CategoryName>Business</CategoryName></row>
            <row id="5"><CategoryName>Culture</CategoryName></row>
        </rows>

    Read the article

  • Need a Java based interruptible timer thread

    - by LambeauLeap
    I have a main program which runs a script on the target device (a smart phone) and sits in a while loop waiting for stdout messages. In this particular case, however, some of the heartbeat messages on stdout can be spaced 45 seconds to a minute apart. Something like:

        stream = device.runProgram(RESTORE_LOGS, new String[] {});
        stream.flush();

        String line = stream.readLine();
        while (line.compareTo("") != 0) {
            reporter.commentOnJob(jobId, line);
            line = stream.readLine();
        }

    So I want to be able to start a new interruptible thread after reading a line from stdout, with a required sleep window. When I am able to read a new line, I want to be able to interrupt/stop the timer (I'm having trouble killing the process), handle the new line of stdout text, and restart the timer. And in the event that I am not able to read a line within the timer window (say 45 seconds), I want a way to get out of my while loop too. I already tried the thread.run, thread.interrupt approach, but I'm having trouble killing and starting a new thread. Is this the best way out, or am I missing something obvious?

    Read the article

  • Android: How to periodically check current location without draining the battery

    - by uyahalom
    I have a background service which runs periodically via timer.scheduleAtFixedRate. It wakes up at a fixed interval (let's say 60 seconds, for example) and checks the location. The location is requested with locManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 60000, 5, listener); and the actual location is collected from the listener's onLocationChanged. Now, when the phone is outside and GPS reception is good, this works fine. But if the phone is inside, the GPS is almost always active, looking for a signal, and the battery is drained rapidly. I created another thread using a Handler and a Runnable in order to control the GPS active time accurately: I used locManager.requestLocationUpdates(LocationManager.GPS_PROVIDER, 0, 0, listener); and locManager.removeUpdates(listener); so I can turn the GPS on and off as I want. In this case, I can keep the GPS on for the exact amount of time, but found out that it doesn't lock in an area with good reception even after 10 seconds. So here I'm draining the battery again... I'm using API level 7, hence I cannot use locationManager.requestSingleUpdate. I have two questions:
    1. Is there any way to optimize this process?
    2. Will upgrading to API level 9 (and using locationManager.requestSingleUpdate) improve the process significantly? I mean, is it worth upgrading?

    Read the article

  • Scalably processing large amounts of complicated database data in PHP, many times a day

    - by Eph
    I'm soon to be working on a project that poses a problem for me. It's going to require, at regular intervals throughout the day, processing tens of thousands of records, potentially over a million. Processing is going to involve several (potentially complicated) formulas and the generation of several random factors, writing some new data to a separate table, and updating the original records with some results. This needs to occur for all records, ideally, every three hours. Each new user to the site will be adding between 50 and 500 records that need to be processed in such a fashion, so the number will not be steady. The code hasn't been written, yet, as I'm still in the design process, mostly because of this issue. I know I'm going to need to use cron jobs, but I'm concerned that processing records of this size may cause the site to freeze up, perform slowly, or just piss off my hosting company every three hours. I'd like to know if anyone has any experience or tips on similar subjects? I've never worked at this magnitude before, and for all I know, this will be trivial to the server and not pose much of an issue. As long as ALL records are processed before the next three hour period occurs, I don't care if they aren't processed simultaneously (though, ideally, all records belonging to a specific user should be processed in the same batch), so I've been wondering if I should process in batches every 5 minutes, 15 minutes, hour, whatever works, and how best to approach this (and make it scalable in a way that is fair to all users)?
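
    One common shape for this kind of job is a cron-invoked worker that handles a bounded batch at a time and stops before a time budget expires, so no single run can hog the server for long. A rough sketch of that scheduling logic in Python (the real project is PHP; the table and column names, NOW(), and the %s placeholder style are illustrative assumptions for a DB-API driver):

        import time

        BATCH_SIZE = 1000           # small enough that one batch finishes quickly
        TIME_BUDGET_SECONDS = 240   # stop well before the next cron invocation

        def process_batches(conn, process_record):
            """Work through unprocessed records in bounded batches."""
            started = time.time()
            cursor = conn.cursor()
            while time.time() - started < TIME_BUDGET_SECONDS:
                cursor.execute(
                    "SELECT id, payload FROM records "
                    "WHERE processed_at IS NULL ORDER BY id LIMIT %s",
                    (BATCH_SIZE,),
                )
                rows = cursor.fetchall()
                if not rows:
                    break  # nothing left until users add more records
                for record_id, payload in rows:
                    result = process_record(payload)  # the complicated formulas live here
                    cursor.execute(
                        "UPDATE records SET result = %s, processed_at = NOW() "
                        "WHERE id = %s",
                        (result, record_id),
                    )
                conn.commit()  # commit per batch keeps locks short and progress durable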

    Read the article

  • Clean installation of RHEL 5.5 claims package "desktops" is missing

    - by TKguru42
    Hi all, I'm a student worker in the CS department of my university, so please forgive me for any unprofessional descriptions; simplified explanations are appreciated. I recently replaced some bad graphics cards in a few public workstations. The machines are all the same model. Before putting them back on the network I did fresh installs of RHEL. First I tried 5.4, but yum update ran into all sorts of ugly dependency errors, and if I tried to remove any of the problematic packages, the whole operating system FUBAR'd. Using RHEL 5.5 gave me the same errors during install, saying that "java.1.5.1-sun*" and "desktops" were missing, but yum update didn't have any dependency problems. Now, when I try logging in through the GUI, I get no desktop past the standard RHEL login page. The desktop is a uniform light teal and there's no system tray. An xclock window and an xterm window are open, and Firefox opens automatically, but that's it. Nothing else. What's REALLY confusing is that the computer claims that GNOME is already installed, except it clearly isn't working. Any help or advice is greatly appreciated. If it helps, our department uses kickstart to run our standard Linux installs. I can try to get the script if that would be of use. Thank you!

    Read the article

  • Freeing of allocated memory in Solaris/Linux

    - by user355159
    Hi, I have written a small program and compiled it under Solaris and Linux to measure the performance of applying this code to my application. The program works as follows: first, using the sbrk(0) system call, I take the base address of the heap region. Then I allocate 1.5 GB of memory using malloc, and use memcpy to copy 1.5 GB of content into the allocated area. Then I free the allocated memory. After freeing, I use the sbrk(0) system call again to view the heap size. This is where I get a little confused. On Solaris, even though I freed the allocated memory (nearly 1.5 GB), the heap size of the process stays huge. But when I run the same application on Linux, after freeing, I find that the heap size of the process is equal to the heap size before the 1.5 GB allocation. I know Solaris does not free memory immediately, but I don't know how to tune the Solaris kernel to release the memory immediately after the free() call. Also, please explain why the same problem does not occur on Linux. Can anyone help me out with this? Thanks, Santhosh.

    Read the article

  • SQL Server 2005 Disk Configuration: Single RAID 1+0 or multiple RAID 1+0s?

    - by mfredrickson
    Assuming that the workload for the SQL Server is just a normal OLTP database, and that there are a total of 20 disks available, which configuration would make more sense?
    1. A single RAID 1+0 containing all 20 disks. This physical volume would contain both the data files and the transaction log files, but two logical drives would be created from this RAID: one for the data files and one for the log files. Or...
    2. Two RAID 1+0s, each containing 10 disks. One physical volume would contain the data files, and the other would contain the log files.
    The reason for this question is a disagreement between me (SQL developer) and a co-worker (DBA). For every configuration that I've done, or seen others do, the data files and transaction log files were separated at the physical level and placed on separate RAIDs. However, my co-worker's argument is that by placing all the disks into a single RAID 1+0, any IO done by the server is potentially shared between all 20 disks, instead of just 10 disks in my suggested configuration. Conceptually, his argument makes sense to me. Also, I've found some information from Microsoft that seems to support his position: http://technet.microsoft.com/en-us/library/cc966414.aspx In the section titled "3. RAID10 Configuration", showing a configuration in which all 20 disks are allocated to a single RAID 1+0, it states: "In this scenario, the I/O parallelism can be used to its fullest by all partitions. Therefore, distribution of I/O workload is among 20 physical spindles instead of four at the partition level." But... every other configuration I've seen suggests physically separating the data and log files onto separate RAIDs, and everything I've found here on Server Fault suggests the same. I understand that log files will be write-heavy and that data files will be a combination of reads and writes, but does this require that the files be placed onto separate RAIDs instead of a single RAID?

    Read the article

  • Does IE completely ignore cache control headers for AJAX requests?

    - by Joshua Hayworth
    Hello there, I've got what I would consider a simple test web site: a single page with a single button. Here is a copy of the source I'm working with if you would like to download it and play with it. When that button is clicked, it creates a JavaScript timer that executes once a second. When the timer function is executed, an AJAX call is made to retrieve a text value. That text value is then placed into the DOM. What's my problem? IE caching. Crack open Task Manager and watch what happens to the iexplorer.exe process (IE 8.0.7600.16385 for me) while the timer in that page is executing. See the memory and handle count getting larger? Why is that happening when, by all accounts, I have caching turned off? I've got the jQuery cache option set to false in $.ajaxSetup. I've got the CacheControl header set to no-cache and no-store. The Expires header is set to DateTime.Now.AddDays(-1). The headers are set in both the page code-behind as well as the HTTP handler's response. Anybody got any ideas as to how I could prevent IE from caching the results of the AJAX call? Here is what the iexplorer.exe process looks like in Process Monitor. I believe that the activity shown in this picture is exactly what I'm attempting to prevent.

    Read the article

  • How to animate button text? (loading type of animation) - jquery?

    - by AL
    I've got something that I want to do and want to see what you guys think and how it can be implemented. I have a form on a page, and the form submits to itself to perform some processing (run functions) in a class. Upon clicking the submit button, I want to animate the button text from “SUBMIT .” to “SUBMIT ..” to “SUBMIT …” to “SUBMIT ….” and then back again, sort of "animating" the button text. Once the whole process is done, the button text goes back to the “SUBMIT” text again. Please note that I am not looking for an image-button solution, meaning that I do not want to implement an animated GIF button image. Has anyone done this before, or do you know how this can be done? I have googled but nothing of this kind seems to turn up. Thanks! AL

    Read the article

  • servlet connection to DB

    - by underW
    Initially, after reading books on the subject, I firmly believed that the algorithm for working with a database from a servlet is as follows: create a connection - connect to the database - form a request - send the request to the database - get the query results - process them - close the connection - OK. Now, with a better understanding of the practical side, I realize that nobody does it that way, and everything happens through a connection pool according to the following algorithm: initialize the servlet - create a connection pool - a request comes from a user - take a free connection from the pool - form a request - send the request to the database - get the query results - process them - return the connection to the pool - OK. Now I have this problem: we have 100 users, divided into 10 groups, and each group has its own username and password to connect to the database. Moreover, each group may have different rights to the database. How am I supposed to use a connection pool in this situation? If I understand correctly, a pool is nothing more than a group of similar connections with a single login and password, and here I have 10 username/password pairs. It looks like I cannot use a pool in this situation. What should I do?
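
    The usual answer is to keep one (small) pool per credential group and select the pool by the logged-in user's group, so there are 10 pools rather than one. A sketch of that shape, in Python purely for illustration (the injected connect callable, the pool size, and the connections' close() method are assumptions):

        import threading
        from collections import deque

        class CredentialPools(object):
            """One small pool per (username, password) pair, created lazily."""

            def __init__(self, connect, size_per_group=5):
                self._connect = connect        # e.g. lambda user, pw: driver.connect(user, pw)
                self._size = size_per_group
                self._lock = threading.Lock()
                self._pools = {}               # (username, password) -> deque of idle connections

            def acquire(self, username, password):
                key = (username, password)
                with self._lock:
                    pool = self._pools.setdefault(key, deque())
                    if pool:
                        return pool.popleft()
                # Pool for this group is empty (or brand new): open a fresh connection.
                return self._connect(username, password)

            def release(self, username, password, conn):
                with self._lock:
                    pool = self._pools.setdefault((username, password), deque())
                    if len(pool) < self._size:
                        pool.append(conn)
                        return
                conn.close()  # keep at most size_per_group idle connections per group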

    Read the article

  • Problem deleting .svn directories on Windows XP

    - by John L
    I don't seem to have this problem on my home laptop with Windows XP, but then I don't do much work there. On my work laptop, also Windows XP, I have a problem deleting directories that contain .svn directories. When the delete does eventually work, I have the same issue emptying the Recycle Bin. The pop-up window says "Cannot remove folder text-base: The directory is not empty", or the same for prop-base or another folder under .svn. This continued to happen after I changed the TortoiseSVN config to stop the TSVN cache process from running, and after a reboot of the system. Multiple tries will eventually get it done, but it is a huge annoyance, because there are other issues I'm trying to fix and I'm hoping this is related. 'Connected Backup PC' also runs on the laptop, and the real problem is that cygwin commands don't always work. So I keep thinking the dot files and dot directories have something to do with both problems, and/or that the backup or some other process scanning the directories is the cause. But I've run out of ideas of what to try or how to identify the problem further.

    Read the article

  • Unexpected output using subprocess in Python

    - by Vic
    I am trying to run a shell command from within my Python (version 2.6.5) code, but it is generating different output than the same command run within the shell (bash):

    bash:

        ~> ifconfig eth0 | sed -rn 's/inet addr:(([0-9]{1,3}\.){3}[0-9]{1,3}).*/\1/p' | sed 's/^[ \t]*//;s/[ \t]*$//'
        192.168.1.10

    Python:

        >>> def get_ip():
        ...     cmd_string = "ifconfig eth0 | sed -rn \'s/inet addr:(([0-9]{1,3}\.){3}[0-9]{1,3}).*/\1/p' | sed 's/^[ \t]*//;s/[ \t]*$//\'"
        ...     process = subprocess.Popen(cmd_string, shell=True, stdout=subprocess.PIPE)
        ...     out, err = process.communicate()
        ...     return out
        ...
        >>> get_ip()
        '\x01\n'

    My guess is that I need to escape the quotes somehow when running in python, but I am not sure how to go about this. NOTE: I cannot install additional modules or update python on the machine that this code needs to be run on. It needs to work as-is with Python 2.6.5 and the standard library.
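
    Much of the trouble here is not the shell quoting but the \1 inside a regular (non-raw) Python string: Python interprets it as the control character 0x01 before bash ever sees the command, which is exactly the '\x01\n' that comes back. A sketch of the fix on that assumption, using raw string literals so the backslashes reach sed untouched (standard library only, fine on 2.6.5):

        import subprocess

        def get_ip():
            # Raw strings stop Python from rewriting \1 (and \t) itself,
            # so sed receives the backreference exactly as typed in bash.
            cmd_string = (
                r"ifconfig eth0"
                r" | sed -rn 's/inet addr:(([0-9]{1,3}\.){3}[0-9]{1,3}).*/\1/p'"
                r" | sed 's/^[ \t]*//;s/[ \t]*$//'"
            )
            process = subprocess.Popen(cmd_string, shell=True,
                                       stdout=subprocess.PIPE)
            out, err = process.communicate()
            return out.strip()  # drop the trailing newline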

    Read the article

  • Need a design approach or suggestion for a simple structure using Servlet.

    - by akshay
    Hi, I have to design a page such that whenever a user submits a query, I process it in a servlet and then call the JS page to draw the chart:
    1. The user writes a query on a page.
    2. The page calls the servlet class

        public class MyServlet extends HttpServlet implements DataSourceServlet { ..... return data

       and the user sees a beautiful string like this: google.visualization.Query.setResponse......... /Tiger'},{v:80.0}, {v:false}]}]}});
    3. When the user hits a different HTML page, myhtml.js, it draws the chart.

    I want the MyServlet class itself to call the myhtml.js page and draw the chart directly, and I want to eliminate the beautiful string google.visualization.Query.setResponse......... /Tiger'},{v:80.0}, {v:false}]}]}}); from appearing in the user's browser. What should I do? I tried using functions to call another page, like RequestDispatcher and redirect(), calling the myhtml.js page directly after MyServlet processes the query results. But I get the string google.visualization.Query.setResponse......... /Tiger'},{v:80.0}, {v:false}]}]}}); and the entire myhtml.js code page below it in the browser, without the chart being drawn. Is there any way to keep the beautiful string from reaching the client's browser and only show them the chart being drawn? :) This is the small tutorial I am following: http://code.google.com/apis/visualization/documentation/dev/dsl_get_started.html

    Read the article

  • Should I *always* import my file references into the database in drupal?

    - by sprugman
    I have a cck type with an image field, and a unique_id text field. The file name of the image is based on the unique_id. All of the content, including the image itself, is being generated automatically via another process, and I'm parsing what that generates into nodes. Rather than creating separate fields for the id and the image, and doing an official import of the image into the files table, I'm tempted to only create the id field and create the file reference in the theme layer. I can think of pros and cons:

    1) Theme Layer Approach
    Pros:
    - makes the import process much less complex
    - don't have to worry about syncing the db with the file system as things change
    - more flexible -- I can move my images around more easily if I want
    Cons:
    - maybe not as much The Drupal Way™
    - not as pure -- I'll wind up with more logic on the theme side

    2) Import Approach
    Pros:
    - import method is required if we ever wanted to make the files private (we won't.)
    - safer? Maybe I'll know if there's a problem with the image at import time, rather than view time. Since I'll be bulk importing, that might make a difference.
    - if I delete a node through the admin interface, drupal might be able to delete the file for me, as well.
    Con:
    - more complex import and maintenance

    All else being equal, simpler is always better, so I'm leaning toward #1. Are there any other issues I'm missing? (Since this is an open ended question, I guess I'll make it a community wiki, whatever that means.)

    Read the article

  • Communicating with a running python daemon

    - by hanksims
    I wrote a small Python application that runs as a daemon. It utilizes threading and queues. I'm looking for general approaches to altering this application so that I can communicate with it while it's running. Mostly I'd like to be able to monitor its health. In a nutshell, I'd like to be able to do something like this:

        python application.py start  # launches the daemon

    Later, I'd like to be able to come along and do something like:

        python application.py check_queue_size  # return info from the daemonized process

    To be clear, I don't have any problem implementing the Django-inspired syntax. What I don't have any idea how to do is send signals to the daemonized process (start), or write the daemon to handle and respond to such signals. Like I said above, I'm looking for general approaches. The only one I can see right now is telling the daemon to constantly log everything that might be needed to a file, but I hope there's a less messy way to go about it. UPDATE: Wow, a lot of great answers. Thanks so much. I think I'll look at both Pyro and the web.py/Werkzeug approaches, since Twisted is a little more than I want to bite off at this point. The next conceptual challenge, I suppose, is how to go about talking to my worker threads without hanging them up. Thanks again.
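
    One general approach that stays in the standard library is to have the daemon run a tiny listener thread on a Unix domain socket and have "python application.py check_queue_size" connect to it. A rough sketch (the socket path and command set are illustrative, and error handling is omitted):

        import json
        import os
        import socket
        import threading

        SOCKET_PATH = "/tmp/application.sock"  # illustrative; pick a real location

        def start_control_listener(work_queue):
            """Run inside the daemon: answer simple status commands."""
            if os.path.exists(SOCKET_PATH):
                os.unlink(SOCKET_PATH)
            server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            server.bind(SOCKET_PATH)
            server.listen(1)

            def serve():
                while True:
                    conn, _ = server.accept()
                    try:
                        command = conn.recv(1024).strip()
                        if command == b"check_queue_size":
                            reply = {"queue_size": work_queue.qsize()}
                        else:
                            reply = {"error": "unknown command"}
                        conn.sendall(json.dumps(reply).encode())
                    finally:
                        conn.close()

            listener = threading.Thread(target=serve)
            listener.daemon = True  # never keeps the process alive on its own
            listener.start()

        def ask_daemon(command):
            """Run from 'application.py check_queue_size': query the daemon."""
            client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            client.connect(SOCKET_PATH)
            client.sendall(command.encode())
            try:
                return client.recv(1024)
            finally:
                client.close()

    Because the listener only reads values that are already safe to read concurrently (Queue.qsize(), counters the workers update), it never blocks the worker threads, which also covers the "talking to my worker threads without hanging them up" concern.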

    Read the article

  • Custom API requirement

    - by Jonathan.Peppers
    We are currently working on an API for an existing system. It basically wraps some web requests as an easy-to-use library that 3rd party companies should be able to use with our product. As part of the API, there is an event mechanism where the server can call back to the client via a constantly running socket connection. To minimize load on the server, we want to have only one connection per computer. Currently there is a socket open per process, and that could eventually cause load problems if you had multiple applications using the API. So my question is: if we want to deploy our API as a single standalone assembly, what is the best way to fix our problem? A couple of options we thought of:
    - Write an out-of-process COM object (don't know if that works in .Net)
    - Include a second exe file that would be required for events; it would have to single-instance itself and open a named pipe or something to communicate across multiple processes
    - Extract this exe file from an embedded resource and execute it
    None of those really seem ideal. Any better ideas?

    Read the article
