Search Results

Search found 29203 results on 1169 pages for 'state machine workflow'.

Page 397/1169

  • Make C# application Group Policy aware

    - by Stefan Koell
    I want to make my app GPO-aware. I know that it's basically just reading from a specific registry path, but I still have some questions: How do I detect GPO refreshes? There's RegisterGPNotification here: http://msdn.microsoft.com/en-us/library/aa374404(VS.85).aspx but is there anything ready-baked for C# out there or at Microsoft? What's considered best practice: is the machine policy stronger than the user policy, or does the user policy overrule the machine policy? Anyone who wants to share some experience in that area? Thanks, Stefan
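
    A minimal P/Invoke sketch of how RegisterGPNotification could be wired up from C# (I'm not aware of a ready-made managed wrapper; the event handling and the re-read step are illustrative assumptions, not a confirmed recipe):

      using System;
      using System.Runtime.InteropServices;
      using System.Threading;

      class GpoWatcher
      {
          // Native functions from userenv.dll (see the MSDN page linked above).
          [DllImport("userenv.dll", SetLastError = true)]
          static extern bool RegisterGPNotification(IntPtr hEvent, bool bMachine);

          [DllImport("userenv.dll", SetLastError = true)]
          static extern bool UnregisterGPNotification(IntPtr hEvent);

          static void Main()
          {
              using (var gpEvent = new AutoResetEvent(false))
              {
                  IntPtr handle = gpEvent.SafeWaitHandle.DangerousGetHandle();

                  // true = watch machine policy; pass false to watch user policy instead.
                  if (!RegisterGPNotification(handle, true))
                      throw new System.ComponentModel.Win32Exception();

                  gpEvent.WaitOne(); // blocks until the next Group Policy refresh
                  Console.WriteLine("Group Policy refreshed - re-read the policy registry keys here.");

                  UnregisterGPNotification(handle);
              }
          }
      }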

    Read the article

  • Statically Compiled Oracle Client Drivers/Code

    - by blockcipher
    Hello, I'm looking to write a command-line program that can execute database scripts against an Oracle server, however the machine the program will be run on may not have an Oracle client installed on it. I also don't want to rely on a language that requires a VM as there's no guarantee that the VM will be installed, so a language like C is preferable for this. Is there a way that I can statically compile/build this program and not have to have the user install the Oracle client on that machine? I'm trying to be as unobtrusive as possible. Thanks.

    Read the article

  • .Net OutOfMemory on Server but not Desktop

    - by Jörg Battermann
    Is it possible that the .NET framework behaves differently when it comes to garbage collection / memory limitations in server environments? I am running explicitly x86-compiled apps on a 64-bit server machine with 32 GB of physical RAM and I am running out of memory (System.OutOfMemoryException) even though nothing but that particular app is running and the server/all other apps utilize 520 MB total, but I cannot reproduce that behaviour on my own (client Win7) machine. Now I know that the app -is- memory intensive, but why is it causing problems on the server and not on the client?

    Read the article

  • Hudson trigger builds remotely gives a forbidden 403 error

    - by Ritesh M Nayak
    I have a shell script on the same machine that Hudson is deployed on, and upon executing it, it calls wget on a Hudson build trigger URL. Since it's the same machine, I access it as http://localhost:8080/hudson/job/jobname/build?token=sometoken Typically, this is supposed to trigger a build of the project. But I get a 403 Forbidden when I do this. Does anybody have any idea why? I have tried this using a browser and it triggers the build, but via the command line it doesn't seem to work. Any ideas?
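
    One guess worth ruling out (this depends entirely on how Hudson's security is configured, so treat it as an assumption): the browser sends your logged-in session cookie, while a bare wget sends no credentials at all. A sketch of passing them explicitly:

      wget --auth-no-challenge \
           --http-user=builduser \
           --http-password=secret \
           "http://localhost:8080/hudson/job/jobname/build?token=sometoken"

    Here builduser/secret are placeholders for an account that has permission to build the job.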

    Read the article

  • Git: how do you merge with remote repo?

    - by Marco
    Please help me understand how git works. I clone my remote repository on two different machines. I edit the same file on both machines. I successfully commit and push the update from the first machine to the remote repository. I then try to push the update on the second machine, but get an error: ! [rejected] master -> master (non-fast-forward) I understand why I received the error. How can I merge my changes into the remote repo? Do I need to pull the remote repo first?
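
    For reference, a typical sequence on the second machine (assuming the remote is named origin and the branch is master) would be:

      git pull origin master   # fetch the remote changes and merge them into your local branch
      # resolve any conflicts, commit the merge
      git push origin master   # the push is now a fast-forward and is accepted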

    Read the article

  • ASP.Net: Scheduled tasks in a shared environment.

    - by UpTheCreek
    This is really an extension of this question, which asked the best way to schedule tasks which need to be performed periodically within ASP.NET in a normal environment. However, I would like to ask this specifically for a shared hosting environment (most of the answers to the previous question would not work in a shared environment as far as I know). One obvious solution would be to have a page which is responsible for performing the tasks on the hosted machine, and call this page from another machine (that you have full control over) using e.g. a Windows scheduled task. This is a bit nasty though - are there any better approaches?
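
    To make the described workaround concrete, here is a rough sketch of the calling side that would run on the machine you control (the URL is a made-up placeholder; a Windows scheduled task would launch this executable periodically):

      using System;
      using System.Net;

      class TaskPinger
      {
          static void Main()
          {
              // Hypothetical address of the page on the shared host that performs the work.
              const string url = "http://www.example.com/tasks/run-scheduled.aspx";

              using (var client = new WebClient())
              {
                  string response = client.DownloadString(url); // requesting the page triggers the tasks
                  Console.WriteLine(response);
              }
          }
      }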

    Read the article

  • How is "clean" testing done on the Macintosh without virtualization?

    - by Schnapple
    One of the things I've run across on Windows is when a web browser plugin or program you're developing makes an assumption that something is installed that, by default, isn't always present on Windows. A perfect example would be .NET - a whole lot of people running Windows XP have never installed any version of .NET, and so the installer needs to detect and remedy this if necessary. The way I've been testing this on Windows is to have a virtual machine with a snapshot of a clean, patched, but otherwise untouched install of XP or Vista or 7 or whatever. When I'm done testing I just discard any changes since the snapshot. Works great. I'm now developing something for the Macintosh, a platform which is very new to me, and I'm seeing that virtualization does not appear to be an option. It's explicitly forbidden in the EULA of Mac OS X; it's only allowed for Mac OS X Server, which, since I'm targeting an end-user product, is of no use to me, and the one program I see which can virtualize it - VirtualBox - only supports the server version and actively shuts down any discussion of running the consumer/client version of Mac OS X. And the only instructions I find anywhere on the topic seem to involve the use of "hacking" programs, which is very much incompatible with the full-time gig I'm trying to do this for. So it looks like virtualization is out, but at various points I'm going to want or need to simulate what it's like to install and run this software on a "clean" Macintosh. How do people usually do this? Just buy multiple Macintoshes and use Time Machine? Am I thinking about this all wrong and everything Just Works? To be clear, I'm not trying to run Mac OS X on a Windows machine. I have a Macintosh, I'm fine with virtualizing Mac OS X on Apple hardware, I'm just not seeing a route to making the non-Server version do this. I'm aware that Mac OS X Server can be virtualized, but that's not what I'm going for. I'm aware that there are unsanctioned/unsupported methods of making Mac OS X run in virtualization programs like VirtualBox, but for legal reasons I am not interested in those. My question is not "how can I do this?" but rather "so this thing I do on Windows seems to not be possible, generally, on the Macintosh, so what do people do to achieve what I'm going for?"

    Read the article

  • Possible to distribute an MPI (C++) program across the internet rather than within a LAN cluster?

    - by Ben
    Hi there, I've written some MPI code which works flawlessly on large clusters. Each node in the cluster has the same CPU architecture and has access to a networked (i.e. 'common') file system (so that each node can execute the actual binary). But consider this scenario: I have a machine in my office with a dual core processor (Intel). I have a machine at home with a dual core processor (AMD). Both machines run Linux, and both machines can successfully compile and run the MPI code locally (i.e. using 2 cores). Now, is it possible to link the two machines together via MPI, so that I can utilise all 4 cores, bearing in mind the different architectures, and bearing in mind the fact that there are no shared (networked) filesystems? If so, how? Thanks, Ben.
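
    As a sketch only (assuming Open MPI, SSH access between the two machines, and the binary copied by hand to the same path on both hosts, since there is no shared filesystem), the setup might look like:

      # hostfile
      office-machine slots=2
      home-machine   slots=2

      # launch across both hosts; ./my_mpi_app must already exist at this path on each machine
      mpirun --hostfile hostfile -np 4 ./my_mpi_app

    The hostnames and program name are placeholders; whether this performs acceptably over an internet link is a separate question.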

    Read the article

  • Need better Java application deployment strategy

    - by Jeff Post
    I have various Java Swing applications that are used by multiple users. My deployment strategy is to locate the .jar file on a network share, and users create shortcuts to that file. When the user launches an application, the file is copied to their machine and executed locally. This method allows for a single copy of the code and easy updating. The problem is that I can't update the file on the network share if any user is using the application at that time. I can't use Web Start because I don't have access to a cert for signing the jar. My current workaround is a separate application that copies the desired app to the user's local machine, launches the application, and then the launcher exits. There is a several second delay from when the launcher app exits and the user's app becomes visible. Can anyone suggest a better deployment method where I can easily update a central copy of the application, one where Windows XP won't maintain a lock on the file?
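
    A bare-bones sketch of the launcher pattern being described (the share path and jar name are invented for illustration):

      import java.io.IOException;
      import java.nio.file.*;

      public class Launcher {
          public static void main(String[] args) throws IOException {
              Path central = Paths.get("\\\\server\\share\\MyApp.jar"); // single copy on the network share
              Path local   = Paths.get(System.getProperty("java.io.tmpdir"), "MyApp.jar");

              // Copy the current version locally so the central file is never locked by a running user.
              Files.copy(central, local, StandardCopyOption.REPLACE_EXISTING);

              // Start the application from the local copy; the launcher then exits.
              new ProcessBuilder("javaw", "-jar", local.toString()).start();
          }
      }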

    Read the article

  • How to remove flash cache on a web site periodically

    - by user320166
    I'm using a rotating Flash banner on my website which takes images and descriptions from an XML file. I change the XML very often, but on my local machine the banner takes a day or two to get updated. Although I can clear my local machine's cache, the problem still remains for other users who visit my web page. Is there a programmatic way in Flash or in HTML to overcome this problem? Maybe a server configuration? PS: the code below works fine, but it bypasses the cache completely; I need the XML cache to expire only after a specific time period.

      var timestamp:Date = new Date();
      xmlData.load("/flash/images.xml?cachebuster=" + timestamp.getTime());
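
    One idea, building on the snippet above (offered as a sketch, not a confirmed fix): round the timestamp down to a fixed interval so the query string, and therefore the cached copy, only changes once per period. Assuming a one-hour window:

      var interval:Number = 60 * 60 * 1000;                       // one hour, in milliseconds
      var now:Number = new Date().getTime();
      var bucket:Number = Math.floor(now / interval) * interval;  // identical for the whole hour
      xmlData.load("/flash/images.xml?cachebuster=" + bucket);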

    Read the article

  • Custom script in .screenrc

    - by benoror
    Hi. I made a script that spawns a remote shell, or runs a local shell if the given host is the current machine:

      #!/bin/bash
      # By: benoror <[email protected]>
      #
      # spawns a remote shell or runs a local shell whether it's on the current machine or not
      # $1 = hostname
      if [ "$(hostname)" == "$1" ]; then
          bash
      else
          ssh "$1.local"
      fi

    For example, if I'm on server1: ./spawnshell.sh server1 runs bash, while ./spawnshell.sh server2 opens ssh to server2.local. I want that script to run automatically in separate tabs in GNU Screen, but I can't make it run. My .screenrc:

      ...
      screen -t "@server1" 1 exec /home/benoror/scripts/spawnshell.sh server1
      screen -t "@server2" 2 exec /home/benoror/scripts/spawnshell.sh server2
      ...

    But it doesn't work; I've tried without 'exec', with the -X option and a lot more. Any ideas?

    Read the article

  • Merging MySQL Structure and Data

    - by Shahid
    I have a MySQL database running on a deployment machine which also contains data. Then I have another MySQL database which has evolved in terms of STRUCTURE + DATA for some time. I need a way to merge the changes (ONLY) for both structure and data into the DB on the deployment machine without disturbing the existing data. Does anyone know of a tool that can do this safely? I have had a look at a few comparison tools, but I need a tool which can automate the merge operation. Note also that most of the data in the tables is BINARY, so I can't use many file comparison tools. Does anyone know of a solution to this? Thanks
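
    Not a complete answer, but for the structure half of the problem a schema-only dump from each server at least makes the differences visible before any merge is attempted (the user and database names are placeholders):

      # structure only, no data
      mysqldump --no-data -u user -p source_db > source_schema.sql
      mysqldump --no-data -u user -p deploy_db > deploy_schema.sql

      # inspect the structural differences
      diff source_schema.sql deploy_schema.sql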

    Read the article

  • Multi-process builds in Visual Studio 2010: Worth it?

    - by coryr
    I've started testing our C++ software with VS2010 and the build times are really bad (30-45 minutes, about double the VS2005 times). I've been reading about the /MP switch for multi-process compilation. Unfortunately, it is incompatible with some features that we use quite a bit like #import, incremental compilation, and precompiled headers. Have you had a similar project where you tried the /MP switch after turning off things like precompiled headers? Did you get faster builds? My machine is running 64-bit Windows 7 on a 4 core machine with 4 GB of RAM and a fast SSD storage. Virus scanner disabled and a pretty minimal software environment.

    Read the article

  • ZF-Autoloader not working in UnitTests on Ubuntu

    - by Sam
    I've got a problem regarding unit-testing a Zend Framework application under Ubuntu 12.04. The project structure is a default Zend application, with the models defined as follows:

      ./application
        ./models
          ./DbTable
            ./ProjectStatus.php   (Application_Model_DbTable_ProjectStatus)
          ./Mappers
            ./ProjectStatus.php   (Application_Model_Mapper_ProjectStatus)
          ./ProjectStatus.php     (Application_Model_ProjectStatus)

    The problem here is with the Zend-specific autoloading. The naming convention appears to be that the folder Mappers holds all classes named with _Mapper, not _Mappers. This is internal Zend behaviour, which is fine so far. On my Windows machine PHPUnit runs without any problems and instantiates all those classes. On my Ubuntu machine, however, with Jenkins running on it, PHPUnit fails to find the appropriate classes, giving me the following error:

      Fatal error: Class 'Application_Model_Mapper_ProjectStatus' not found in /var/lib/jenkins/jobs/PAM/workspace/tests/application/models/Mapper/ProjectStatusTest.php on line 39

    The error really appears to be that the Zend autoloader doesn't load the classes on the Ubuntu machine, but I can't figure out how or why. I think I've double-checked every point of contact with the Zend autoloading, but I just can't figure this out. I'll paste the snippets that seem relevant from my point of view and hope someone has some insight.

    Jenkins target for PHPUnit:

      <target name="phpunit" description="Run unit tests with PHPUnit">
        <exec executable="phpunit" failonerror="true">
          <arg line="--configuration '${basedir}/tests/phpunit.xml' --coverage-clover '${basedir}/build/logs/clover.xml' --coverage-html '${basedir}/build/coverage/.' --log-junit '${basedir}/build/logs/junit.xml'" />
        </exec>
      </target>

    ./tests/phpunit.xml:

      <phpunit bootstrap="./bootstrap.php">
        ... this shouldn't be of relevance ...
      </phpunit>

    ./tests/bootstrap.php:

      <?php
      // Define path to application directory
      defined('APPLICATION_PATH') || define('APPLICATION_PATH', realpath(dirname(__FILE__) . '/../application'));
      // Define application environment
      defined('APPLICATION_ENV') || define('APPLICATION_ENV', (getenv('APPLICATION_ENV') ? getenv('APPLICATION_ENV') : 'testing'));
      // Ensure library/ is on include_path
      set_include_path(implode(PATH_SEPARATOR, array(
          realpath(APPLICATION_PATH . '/../library'),
          get_include_path(),
      )));
      require_once 'Zend/Loader/Autoloader.php';
      Zend_Loader_Autoloader::getInstance();

    Any help will be appreciated.

    Read the article

  • Subreports: finding the subtotal of each subreport and the grand total in the main report

    - by sonia
    I want to find the grand total from the subreport subtotals. I have three subreports: 1. item report, 2. labor report, 3. machine report. I have found the total in each of those reports using a shared variable, with a formula like:

      shared numbervar totalitem=sum({storedprocedrename.columnname})

    I have done this in all the subreports. Now I want the grand total of all of these. The formula I have written in the main report is:

      shared numbervar totalitem;    // same variable used in subreport item
      shared numbervar labtotal;     // same variable used in subreport labor
      shared numbervar machinetotal; // same variable used in subreport machine
      numbervar total;
      total=totalitem+labtotal+machinetotal;
      total

    But it is not giving the correct result, and not in the correct format. Please tell me the main-report code in detail. Thanks
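
    One detail worth checking, offered as a guess rather than a confirmed fix: in Crystal syntax = is a comparison, while := is the assignment operator, and a formula displays the value of its last expression. A sketch of the main-report formula, placed in a section that prints after all three subreports have been evaluated:

      shared numbervar totalitem;    // set in the item subreport
      shared numbervar labtotal;     // set in the labor subreport
      shared numbervar machinetotal; // set in the machine subreport
      numbervar total := totalitem + labtotal + machinetotal;
      total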

    Read the article

  • window.onbeforeunload ajax request problem with Chrome

    - by lang2
    Hello, I have a web page that handles remote control of a machine through Ajax. When the user navigates away from the page, I'd like to automatically disconnect from the machine. So here is the code:

      window.onbeforeunload = function () {
          bas_disconnect_only();
      }

    The disconnection function simply sends an HTTP GET request to a PHP server-side script, which does the actual work of disconnecting:

      function bas_disconnect_only() {
          var xhr = bas_send_request("req=10", function() {
          });
      }

    This works fine in Firefox, but with Chrome the Ajax request is not sent at all. There is an unacceptable workaround: adding an alert to the callback function:

      function bas_disconnect_only() {
          var xhr = bas_send_request("req=10", function() {
              alert("You've been automatically disconnected.");
          });
      }

    After adding the alert call, the request is sent successfully. But as you can see, that's not really a workaround at all. Could somebody tell me if this is achievable with Chrome? What I'm doing looks completely legit to me. Thanks,
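
    One workaround that is often suggested (an assumption here, since it depends on what bas_send_request does internally: if the request is asynchronous, the browser may tear the page down before it is sent): perform the request synchronously inside onbeforeunload so the browser waits for it to complete. A bare sketch, with a hypothetical URL standing in for the PHP script:

      window.onbeforeunload = function () {
          var xhr = new XMLHttpRequest();
          // Third argument false = synchronous, so the request finishes before the page unloads.
          xhr.open("GET", "disconnect.php?req=10", false);
          xhr.send(null);
      };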

    Read the article

  • Read from POS Devices

    - by Garhwali Bhai
    Hi Experts, What would be the best design for a Java web-based application to read a check or credit card from POS devices which are configured/installed on the client machine? The design I have decided on is: when the cashier selects 'Check' or 'Credit Card' as the payment mode, open a pop-up. From the pop-up, I call an applet through JNLP which uses the JPOS API to communicate with the POS devices on the client machine. When the cashier swipes the card or inserts the check, the applet class reads it and sends the information back to the pop-up window. The pop-up window calls the parent window, populates all the required fields and closes the pop-up.

    Read the article

  • Visual Studio Custom Project Template

    - by rauts
    Hi All, I have created a custom C# project template for Visual Studio 2008. It works perfectly. The only issue is that I have to place the zip file for the project template under "C:\Documents and Settings\\My Documents\Visual Studio 2008\Templates\ItemTemplates\Visual C#". As this folder is specific to each user on the machine, I will have to make sure that every user on the machine has the project template installed separately. Is there any way I can install it just once so that all the users get this project template? In short, can I change the custom project template install directory?
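
    For what it's worth, a hedged sketch of a machine-wide alternative for VS2008 (the zip name is a placeholder, the path assumes a default install, and this needs administrator rights; running devenv /installvstemplates afterwards rebuilds the template cache):

      copy MyProjectTemplate.zip "%ProgramFiles%\Microsoft Visual Studio 9.0\Common7\IDE\ProjectTemplates\CSharp\"
      "%ProgramFiles%\Microsoft Visual Studio 9.0\Common7\IDE\devenv.exe" /installvstemplates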

    Read the article

  • Problem with Authentication in sharepoint using active directory

    - by user549708
    I am currently using a Windows Server 2008 machine. I have Active Directory and SharePoint set up on it. I have a user 'A' in Active Directory and I have given that user read permissions on my site. The problem I now face is that if I log in as user 'A', the site simply shows "error: access denied". This problem goes away if I make 'A' a site collection administrator, however that is not what I want. I just want 'A' to be a visitor who can browse the site. I also tried granting the 'read' permission on my site for 'A', but that still gives me the access denied message. Thank you for your time.

    Read the article

  • Web services Authentication Jungle

    - by redben
    I have been doing some research lately about the best approaches to authenticating web service calls (REST, SOAP or whatever), but none of the approaches convinced me and I still can't make a choice. Some talk about SSL and HTTP basic authentication (login/password), which just seems weird for a machine (I mean, having to assign a login/password to a machine, or is it not?). Others say API keys (it seems like this scheme is used more for tracking than really for securing). Some say tokens (like session IDs), but shouldn't we stay stateless (especially in REST style)? In my use case, when a remote app calls one of our web services, I obviously have to authenticate the calling application, and the call must, if applicable, tell me which user it impersonates so I can deal with authorization later. Any thoughts?

    Read the article

  • (Pathinfo vs fnmatch part 2) Speed benchmark reversed on Windows and Mac

    - by zaf
    On a previous question the pathinfo and fnmatch functions were benchmarked and the answers all came out opposite to my benchmark results. You can read the different results with the benchmark code here: http://stackoverflow.com/questions/2693428/pathinfo-vs-fnmatch I couldn't work it out until I ran the same code on a machine running Vista; the results then matched the other users'. My main machine is a Mac. So, my questions are: Why do we get these two different results? Could this apply to other functions?
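
    For anyone re-running the comparison, a stripped-down version of the kind of loop being benchmarked (the iteration count and test string are arbitrary):

      <?php
      $file = '/some/path/picture.jpg';
      $n = 100000;

      $t = microtime(true);
      for ($i = 0; $i < $n; $i++) {
          $ext = pathinfo($file, PATHINFO_EXTENSION);
      }
      echo 'pathinfo: ', microtime(true) - $t, " s\n";

      $t = microtime(true);
      for ($i = 0; $i < $n; $i++) {
          $match = fnmatch('*.jpg', $file);
      }
      echo 'fnmatch:  ', microtime(true) - $t, " s\n";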

    Read the article

  • java web application best practices

    - by Bruce
    Hi all, I'm trying to figure out the optimum way to develop and release a fairly simple web application, and I'm running into several problems. I'll outline the decisions I've made, because somewhere I've clearly gone off the rails. Hugely grateful for any help! I have what I think is a fairly simple web application: a couple of JSPs that reference a couple of Java beans, and the usual static html, js, css and images.

    Decision 1) I wanted to have a clear and clean release procedure, such that I could develop on my local machine and then release reliably to a production machine. I therefore made the decision to package the application into a war file (including all the static resources), to minimize the separate bits and pieces I would need to release. So far so good?

    Decision 2) I wanted things on my local machine to be as similar as possible to the production environment. So in my html, for example, I may have a reference to a static file such as http://static.foo.com/file . To keep this code working seamlessly on dev and prod, I decided to put static.foo.com in my /etc/hosts when developing locally, so that all the urls work correctly without changing anything.

    Decision 3) I decided to use Eclipse and Maven to give me a best-practice environment for administering and building my project.

    So I have a nice tight setup now, except that every time I want to change anything in development, like one line in an html file, I have to rebuild the entire project and then wait for Tomcat to load the war before I can see if it's what I wanted. So my questions are:

    1) Is there a way to connect up Eclipse and Tomcat so that I don't have to rebuild the war each time, i.e. Tomcat looks straight at my actual workspace to serve up the static files?

    2) I think I'm maybe making things harder by using /etc/hosts to reflect production urls. Is there a better way that doesn't involve manually changing over urls? (Relative urls are fine of course, but where you have many subdomains, say one for static files and one for dynamic, you have to write out the full path, surely?)

    3) Is this really best practice? How do people set things up so that they balance the requirement for an automated, all-encompassing build process on the one hand, and the speed and flexibility to be able to develop javascript and html and css quickly, as quickly as if one just pointed Apache at the directory and developed live? What do people find works?

    Many thanks!
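
    On question 1, one approach people use (a sketch only, assuming Tomcat and a standard Maven webapp layout; the paths are placeholders) is to declare a Tomcat context whose docBase points straight at the exploded webapp directory in the workspace, so html/js/css edits are served immediately without rebuilding the war:

      <!-- conf/Catalina/localhost/myapp.xml -->
      <Context docBase="/home/bruce/workspace/myapp/src/main/webapp" reloadable="true" />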

    Read the article

  • Running GUI application in the Windows service mode

    - by Leonid
    I'm writing a server running as a Windows service that, on request, invokes Firefox to generate a PDF snapshot of a webpage. I know it is a bad idea to run a GUI program in service mode, but the server nature of my program prevents running it in user mode. Running a user-level 'proxy' is also not an option, since there might be no interactive user logged in on the machine the server is running on. In my experiments Firefox successfully produced a PDF when the service was running under a user account that was already logged in. Obviously it didn't work in other cases: for Local System and for user accounts that weren't logged in. Under LocalSystem with the 'Allow service to interact with desktop' option enabled, I could see that Firefox started but reported that it was unable to find a printer. Since it wouldn't be practical to require an open user session for the PDF server to run, is there any workaround for this other than running the whole thing in a virtual machine?

    Read the article

  • Writing a simple batch file to setup a variable?

    - by Sam
    I want to write a simple batch file that sets up an environment variable based on the machine architecture. It is as below:

      set ARCH=%PROCESSOR_ARCHITECTURE%
      echo %ARCH%
      if %ARCH%==x86 (
        set JAVA_ROOT=C:\Progra~1\Java\j2re1.4.2_13
      ) else (
        set JAVA_ROOT=C:\Progra~2\Java\j2re1.4.2_13
      )
      echo JAVA_ROOT is %JAVA_ROOT%

    On a 64-bit machine where the architecture is 'AMD64', JAVA_ROOT is displayed as 'C:\Progra~2\Java\j2re1.4.2_13' at the echo statement. But when I run an application that uses this file, the first value of JAVA_ROOT gets picked up, 'C:\Progra~1\Java\j2re1.4.2_13'. I don't have any idea why it goes into the 'if' part even though I am running this on 64-bit Windows 7. When I echoed the

    Read the article
