Search Results

Search found 32919 results on 1317 pages for 'program management'.


  • XP Deploying issues due to msvcr90.dll trying to load FlsAlloc

    - by Sorin Sbarnea
    I have an application built with VS2008 SP1a (9.0.30729.4148) on Windows 7 x64 that will not start under XP. The message is: "The application failed to initialize properly (0x80000003). Click on OK to terminate the application." I checked with depends.exe and found that msvcr90.dll does try to load FlsAlloc from KERNEL32.dll - and FlsAlloc is available only starting with Vista. I'm sure it is not used by the application. How do I solve this? The SxS package is already installed on the target machine - in fact I have all three versions of the 9.0 SxS runtime (initial release, SP1, and SP1 + security patch). The application is compiled with _BIND_TO_CURRENT_VCLIBS_VERSION=1, and I defined the target Windows version in stdafx.h:

        #define WINVER 0x0500
        #define _WIN32_WINNT 0x0500

    Manifest file:

        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <trustInfo xmlns="urn:schemas-microsoft-com:asm.v3">
            <security>
              <requestedPrivileges>
                <requestedExecutionLevel level="asInvoker" uiAccess="false" />
              </requestedPrivileges>
            </security>
          </trustInfo>
          <dependency>
            <dependentAssembly>
              <assemblyIdentity type="win32" name="Microsoft.VC90.CRT" version="9.0.30729.4148" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b" />
            </dependentAssembly>
          </dependency>
          <dependency>
            <dependentAssembly>
              <assemblyIdentity type="win32" name="Microsoft.VC90.MFC" version="9.0.30729.4148" processorArchitecture="x86" publicKeyToken="1fc8b3b9a1e18e3b" />
            </dependentAssembly>
          </dependency>
        </assembly>

    Output from depends.exe (abridged; the full log shows every system DLL loading successfully):

        Started "c:\program files\app\app.EXE" (process 0xA0) at address 0x00400000. Successfully hooked module.
        Loaded "c:\windows\system32\NTDLL.DLL" at address 0x7C900000. Successfully hooked module.
        Loaded "c:\windows\system32\KERNEL32.DLL" at address 0x7C800000. Successfully hooked module.
        Loaded "c:\program files\app\MFC90.DLL" at address 0x785E0000. Successfully hooked module.
        Loaded "c:\program files\app\MSVCR90.DLL" at address 0x78520000. Successfully hooked module.
        ... (USER32, GDI32, SHLWAPI, ADVAPI32, RPCRT4, SECUR32, MSVCRT, COMCTL32, MSIMG32, SHELL32, OLEAUT32 and OLE32 all load and hook successfully) ...
        Entrypoint reached. All implicit modules have been loaded.
        DllMain(0x78520000, DLL_PROCESS_ATTACH, 0x0012FD30) in "c:\program files\app\MSVCR90.DLL" called.
        GetProcAddress(0x7C800000 [c:\windows\system32\KERNEL32.DLL], "FlsAlloc") called from "c:\program files\app\MSVCR90.DLL" at address 0x78543ACC and returned NULL. Error: The specified procedure could not be found (127).
        GetProcAddress(0x7C800000 [c:\windows\system32\KERNEL32.DLL], "FlsGetValue") called from "c:\program files\app\MSVCR90.DLL" at address 0x78543AD9 and returned NULL. Error: The specified procedure could not be found (127).
        GetProcAddress(0x7C800000 [c:\windows\system32\KERNEL32.DLL], "FlsSetValue") called from "c:\program files\app\MSVCR90.DLL" at address 0x78543AE6 and returned NULL. Error: The specified procedure could not be found (127).
        GetProcAddress(0x7C800000 [c:\windows\system32\KERNEL32.DLL], "FlsFree") called from "c:\program files\app\MSVCR90.DLL" at address 0x78543AF3 and returned NULL. Error: The specified procedure could not be found (127).

    Read the article

  • Google App Engine with Java - Error running javac.exe compiler

    - by dta
    On Windows XP, I just downloaded and unzipped the Google App Engine Java SDK to C:\Program Files\appengine-java-sdk. I have the JDK installed in C:\Program Files\Java\jdk1.6.0_20. I ran the sample application with:

        appengine-java-sdk\bin\dev_appserver.cmd appengine-java-sdk\demos\guestbook\war

    Then I visited localhost:8080 and got:

        HTTP ERROR 500
        Problem accessing /. Reason: Error running javac.exe compiler
        Caused by: Error running javac.exe compiler
            at org.apache.tools.ant.taskdefs.compilers.DefaultCompilerAdapter.executeExternalCompile(DefaultCompilerAdapter.java:473)

    How do I fix this? My JAVA_HOME points to C:\Program Files\Java\jdk1.6.0_20. I also tried changing my appcfg.cmd to:

        @"C:\Program Files\Java\jdk1.6.0_20\bin\java" -cp "%~dp0..\lib\appengine-tools-api.jar" com.google.appengine.tools.admin.AppCfg %*

    That didn't work either.

    Read the article

  • avi to mpeg4 command line convertor

    - by Samvel Siradeghyan
    Hi all, I am writing a program for recording video from IP cameras. I use the AForge framework and can save video in AVI format, but its size is too big. I need a command-line program to convert the videos from AVI to MPEG-4. Is there a free program that does this, and if so, where can I download it and how do I use it? Thanks.
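    Not part of the question, but for reference: a common approach is to shell out to a converter such as ffmpeg from the recording program itself. A minimal C# sketch, assuming ffmpeg is installed and reachable via PATH (the file names and codec flag are illustrative only, not a recommendation for a particular build of ffmpeg):

        // Sketch: convert an AVI recording to MPEG-4 by invoking ffmpeg.
        // Assumes ffmpeg is on the PATH; paths and flags are examples only.
        using System.Diagnostics;

        class AviConverter
        {
            public static void ConvertToMpeg4(string inputAvi, string outputFile)
            {
                var psi = new ProcessStartInfo();
                psi.FileName = "ffmpeg";
                // -i <input> : source file; -vcodec mpeg4 : re-encode video as MPEG-4
                psi.Arguments = "-i \"" + inputAvi + "\" -vcodec mpeg4 \"" + outputFile + "\"";
                psi.UseShellExecute = false;
                psi.RedirectStandardError = true;   // ffmpeg writes its progress to stderr

                using (var proc = Process.Start(psi))
                {
                    proc.StandardError.ReadToEnd(); // drain output so the process cannot block
                    proc.WaitForExit();
                }
            }
        }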

    Read the article

  • C# - Shortest path map finding

    - by nXqd
    I'm trying to write a simple map-finding program in C#. I have a picture of a city/district (it is constant), and I'll add a database to the program to store variables and points. I use Floyd's algorithm to find the shortest path, and I'll draw the path on the image (by coordinates, I think). This is the first time I've written a real program in C#, so how should I implement this? Thanks so much for reading!
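    For reference, a minimal Floyd-Warshall sketch in C# (not from the question - the adjacency matrix, node numbering, and drawing step are assumed): dist starts as the direct edge weights, and next lets you walk the resulting path so you can draw it on the image.

        using System.Collections.Generic;

        static class ShortestPath
        {
            // dist[i,j]: edge weight between points i and j, double.PositiveInfinity if no edge.
            // next[i,j]: the node to visit after i on the shortest path towards j.
            public static void FloydWarshall(double[,] dist, int[,] next)
            {
                int n = dist.GetLength(0);
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++)
                        next[i, j] = j;                       // initially go straight to j

                for (int k = 0; k < n; k++)
                    for (int i = 0; i < n; i++)
                        for (int j = 0; j < n; j++)
                            if (dist[i, k] + dist[k, j] < dist[i, j])
                            {
                                dist[i, j] = dist[i, k] + dist[k, j];
                                next[i, j] = next[i, k];      // route i->j via k first
                            }
            }

            // Node sequence from a to b, e.g. to draw line segments between stored coordinates.
            public static List<int> GetPath(int a, int b, double[,] dist, int[,] next)
            {
                var path = new List<int> { a };
                if (double.IsPositiveInfinity(dist[a, b])) return path;   // unreachable
                while (a != b) { a = next[a, b]; path.Add(a); }
                return path;
            }
        }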

    Read the article

  • Python: how do I install SciPy on 64 bit Windows?

    - by Peter Mortensen
    How do I install SciPy on my system?

    Update 1: for the NumPy part (which SciPy depends on) there is actually an installer for 64-bit Windows: numpy-1.3.0.win-amd64-py2.6.msi (direct download URL, 2310144 bytes).

    Running the SciPy superpack installer results in this message in a dialog box: "Cannot install. Python version 2.6 required, which was not found in the registry." I already have Python 2.6.2 installed (and a working Django installation in it), but I don't know anything about a registry requirement. The registry entries seem to already exist:

        REGEDIT4
        [HKEY_LOCAL_MACHINE\SOFTWARE\Python]
        [HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore]
        [HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\2.6]
        [HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\2.6\Help]
        [HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\2.6\Help\Main Python Documentation]
        @="D:\\Python262\\Doc\\python262.chm"
        [HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\2.6\InstallPath]
        @="D:\\Python262\\"
        [HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\2.6\InstallPath\InstallGroup]
        @="Python 2.6"
        [HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\2.6\Modules]
        [HKEY_LOCAL_MACHINE\SOFTWARE\Python\PythonCore\2.6\PythonPath]
        @="D:\\Python262\\Lib;D:\\Python262\\DLLs;D:\\Python262\\Lib\\lib-tk"

    What I have done so far:

    Step 1: Downloaded the NumPy superpack installer numpy-1.3.0rc2-win32-superpack-python2.6.exe (direct download URL, 4782592 bytes). Running this installer resulted in the same message: "Cannot install. Python version 2.6 required, which was not found in the registry." Update: there is actually an installer for NumPy that works - see the beginning of the question.

    Step 2: Tried to install NumPy another way. Downloaded the zip package numpy-1.3.0rc2.zip (direct download URL, 2404011 bytes) and extracted it normally to a temporary directory, D:\temp7\numpy-1.3.0rc2 (where setup.py and README.txt are). I then opened a command-line window and ran:

        d:
        cd D:\temp7\numpy-1.3.0rc2
        setup.py install

    This ran for a long time and also included use of cl.exe (part of Visual Studio). Here is a nearly 5000-line transcript (230 KB). This seemed to work. I can now do this in Python:

        import numpy as np
        np.random.random(10)

    with this result:

        array([ 0.35667511, 0.56099423, 0.38423629, 0.09733172, 0.81560421,
                0.18813222, 0.10566666, 0.84968066, 0.79472597, 0.30997724])

    Step 3: Downloaded the SciPy superpack installer, scipy-0.7.1rc3-win32-superpack-python2.6.exe (direct download URL, 45597175 bytes). Running this installer resulted in the message quoted at the beginning.

    Step 4: Tried to install SciPy another way. Downloaded the zip package scipy-0.7.1rc3.zip (direct download URL, 5506562 bytes) and extracted it normally to a temporary directory, D:\temp7\scipy-0.7.1 (where setup.py and README.txt are). I then opened a command-line window and ran:

        d:
        cd D:\temp7\scipy-0.7.1
        setup.py install

    This did not achieve much - here is a transcript (about 95 lines). And it fails:

        >>> import scipy as sp2
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
        ImportError: No module named scipy

    Platform: Python 2.6.2 installed in directory D:\Python262, Windows XP 64-bit SP2, 8 GB RAM, Visual Studio 2008 Professional Edition installed. The startup banner of the installed Python is:

        Python 2.6.2 (r262:71605, Apr 14 2009, 22:46:50) [MSC v.1500 64 bit (AMD64)] on win32
        Type "help", "copyright", "credits" or "license" for more information.
        >>>

    Value of PATH, as reported by SET in a command-line window:

        Path=D:\Perl64\site\bin;D:\Perl64\bin;C:\Program Files (x86)\PC Connectivity Solution\;D:\Perl\site\bin;D:\Perl\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\Program Files (x86)\ATI Technologies\ATI.ACE\Core-Static;d:\Program Files (x86)\WinSCP\;D:\MassLynx\;D:\Program Files (x86)\Analyst\bin;d:\Python262;d:\Python262\Scripts;D:\Program Files (x86)\TortoiseSVN\bin;D:\Program Files\TortoiseSVN\bin;C:\WINDOWS\system32\WindowsPowerShell\v1.0;D:\Program Files (x86)\IDM Computer Solutions\UltraEdit\

    Read the article

  • ApplicationSettingsBase.Upgrade() Not Upgrading User Settings after Recompiling with .NET 4.0

    - by Mageuzi
    I have a C# program that uses the standard ApplicationSettingsBase to save its user settings. This was working fine under .NET 3.5, and the provided Upgrade() method would properly "reload" those settings whenever a new version of my program was released. Recently, I recompiled the program with .NET 4.0; my program's version number also increased. But when I run this version, Upgrade() doesn't seem to detect any previous-version settings and does not "reload" them - it starts blank. As a test, I recompiled yet again, going back to .NET 3.5, and this time the Upgrade() method started working again. Is there a way to allow Upgrade() to work when switching frameworks? Is there something else I am missing? Thanks.
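    For context, the upgrade-on-first-run pattern the question refers to is usually wired up like the sketch below. UpgradeRequired here is an assumed user-scoped bool setting defaulting to true - it is not something from the question itself.

        // Run once at startup. When a new version starts for the first time,
        // pull the previous version's user settings forward before using them.
        if (Properties.Settings.Default.UpgradeRequired)
        {
            Properties.Settings.Default.Upgrade();          // copy prior version's values
            Properties.Settings.Default.UpgradeRequired = false;
            Properties.Settings.Default.Save();
        }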

    Read the article

  • Revisiting ANTS Performance Profiler 7.4

    - by James Michael Hare
    Last year, I did a small review of the ANTS Performance Profiler 6.3. Now that it's a year later and a major version number higher, I thought I'd revisit the review and revise my last post. This post will take the same examples as the original post and update them to show what's new in version 7.4 of the profiler.

    Background

    A performance profiler's main job is to keep track of how much time is typically spent in each unit of code. This helps when we have a program that is not running at the performance we expect, and we want to know where the program is experiencing issues. There are many profilers out there of varying capabilities. Red Gate's typically seem to be very easy to "jump in" and get started with, with very little training required. So let's dig into the Performance Profiler. I've constructed a very crude program with some obvious inefficiencies. It's a simple program that generates random order numbers (or really could be any unique identifier), adds each to a list, sorts the list, then finds the max and min number in the list. Ignore the fact it's very contrived and obviously inefficient; we just want to use it as an example to show off the tool:

        // our test program
        public static class Program
        {
            // the number of iterations to perform
            private static int _iterations = 1000000;

            // The main method that controls it all
            public static void Main()
            {
                var list = new List<string>();

                for (int i = 0; i < _iterations; i++)
                {
                    var x = GetNextId();

                    AddToList(list, x);

                    var highLow = GetHighLow(list);

                    if ((i % 1000) == 0)
                    {
                        Console.WriteLine("{0} - High: {1}, Low: {2}", i, highLow.Item1, highLow.Item2);
                        Console.Out.Flush();
                    }
                }
            }

            // gets the next order id to process (random for us)
            public static string GetNextId()
            {
                var random = new Random();
                var num = random.Next(1000000, 9999999);
                return num.ToString();
            }

            // add it to our list - very inefficiently!
            public static void AddToList(List<string> list, string item)
            {
                list.Add(item);
                list.Sort();
            }

            // get high and low of order id range - very inefficiently!
            public static Tuple<int,int> GetHighLow(List<string> list)
            {
                return Tuple.Create(list.Max(s => Convert.ToInt32(s)), list.Min(s => Convert.ToInt32(s)));
            }
        }

    So let's run it through the profiler and see what happens!

    Visual Studio Integration

    First, let's look at how the ANTS profilers integrate with Visual Studio's menu system. Once you install the ANTS profilers, you will get an ANTS menu item with several options. Notice that you can either Profile Performance or Launch ANTS Performance Profiler. These sound similar but achieve two slightly different actions:

    - Profile Performance: this immediately launches the profiler with all defaults selected to profile the active project in Visual Studio.
    - Launch ANTS Performance Profiler: this launches the profiler much the same way as starting it from the Start Menu. The profiler will pre-populate the application and path information, but allow you to change the settings before beginning the profile run.

    So really, the main difference is that Profile Performance immediately begins profiling with the default selections, whereas Launch ANTS Performance Profiler allows you to change the defaults and attach to an already-running application.

    Let's Fire it Up!

    So when you fire up ANTS, either via the Start Menu or the Launch ANTS Performance Profiler menu in Visual Studio, you are presented with a very simple dialog to get you started. Notice you can choose from many different options for application type. You can profile executables, services, web applications, or just attach to a running process. In fact, in version 7.4 we see two new options added:

    - ASP.NET Web Application (IIS Express)
    - SharePoint web application (IIS)

    So this gives us an additional way to profile ASP.NET applications and the ability to profile SharePoint applications as well. You can also choose your level of detail in the Profiling Mode drop-down. If you choose line-level and method-level timings detail, you will get a lot more detail on the method durations, but this will also slow down profiling somewhat. If you really need the profiler to be as unintrusive as possible, you can change it to sample method-level timings. This performs very light profiling, where basically the profiler collects timings of a method by examining the call stack at given intervals. Which mode you choose depends a lot on how much detail you need to find the issue and how sensitive your program's issues are to timing. So for our example, let's just go with the line and method timing detail. So, we check that all the options are correct (if you launch from VS2010, the executable and path are filled in already), and fire it up by clicking the [Start Profiling] button.

    Profiling the Application

    Once you start profiling the application, you will see a real-time graph of CPU usage that indicates how much your application is using the CPU(s) on your system. During this time, you can select segments of the graph and bookmark them, giving them mnemonic names. This can be useful if you want to compare performance in one part of the run to another part of the run. Notice that once you select a block, it will give you the call tree breakdown for that selection only, and the relative performance of those calls. Once you feel you have collected enough information, you can click [Stop Profiling] to stop the application run and information collection and begin a more thorough analysis.

    Analyzing Method Timings

    So now that we've halted the run, we can look around the GUI and see what we can see. By default, the times are shown as a percentage of the total run time of the application, though you can change this in the View menu to milliseconds, ticks, or seconds as well. This won't affect the percentages of methods; it only affects what units the times are shown in. Notice also that the major hotspot seems to be in a method without source. ANTS Profiler will filter these out by default, but you can right-click on the line and remove the filter to see more detail. This proves especially handy when a bottleneck is due to a method in the BCL. So now that we've removed the filter, we see a bit more detail. In addition, ANTS Performance Profiler gives you the ability to decompile the methods without source so that you can dive even deeper, though typically this isn't necessary for our purposes. When looking at timings, there are generally two types of timings for each method call:

    - Time: the time spent ONLY in this method, not including calls this method makes to other methods.
    - Time With Children: the total time spent in both this method AND the calls this method makes to other methods.

    In other words, the Time tells you how much work is being done exclusively in this method, and the Time With Children tells you how much work is being done inclusively in this method and everything it calls. You can also choose to display the methods in a tree or in a grid. The tree view is the default, and it shows the method calls arranged as a tree of all method calls and the parent methods that called them. This is useful because when you find a hot-spot method, you can see who is calling it to determine if the problem is the method itself or if it is being called too many times. The grid view represents each method only once with its totals and is useful for quickly seeing which method is the trouble spot. In addition, you can choose to display Methods with source, which are generally the methods you wrote (as opposed to native or BCL code), or Any Method, which shows not only your methods but also native calls, JIT overhead, synchronization waits, etc. So these are just two ways of viewing the same data, and you're free to choose the organization that best suits what information you are after.

    Analyzing Method Source

    If we look at the timings above, we see that our AddToList() method (and in particular, its call to the List<T>.Sort() method in the BCL) is the hot-spot in this analysis. If ANTS sees a method that is consuming the most time, it will flag it as a hot-spot to help call out potential areas of concern. This doesn't mean the other statistics aren't meaningful, but the hot-spot is most likely going to be your biggest bang-for-the-buck to concentrate on. So let's select the AddToList() method and see what it shows in the source window below. Notice the source breakout in the bottom pane when you select a method (from either tree or grid view). This shows you the timings in this method per line of code, which gives you a major indicator of where the trouble spot in this method is. So in this case, we see that performing a Sort() on the List<T> after every Add() is killing our performance! Of course, this was a very contrived, duh moment, but you'd be surprised how many performance issues become duh moments. Note that this one line is taking up 86% of the execution time of this application! If we eliminate this bottleneck, we should see a drastic improvement in performance. So to fix this, if we still wanted to maintain the List<T>, we'd have many options, including: delaying the Sort() until after all Add() calls, using a SortedSet, SortedList, or SortedDictionary depending on which is most appropriate, or forgoing the sorting altogether and using a Dictionary.

    Rinse, Repeat!

    So let's just change all instances of List<string> to SortedSet<string> and run this again through the profiler. Now we see the AddToList() method is no longer our hot-spot, but the Max() and Min() calls are! This is good because we've eliminated one hot-spot and now we can try to correct this one as well. As before, we can then optimize this part of the code (possibly by taking advantage of the fact the collection is now sorted and returning the first and last elements - a small sketch of this appears just before the Summary below). We can then rinse and repeat this process until we have eliminated as many bottlenecks as possible.

    Calls by Web Request

    Another feature that was added recently is the ability to view .NET methods grouped by the HTTP requests that caused them to run. This can be helpful in determining which pages, web services, etc. are causing hot spots in your web applications.
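    As a concrete illustration of the "first and last elements" optimization mentioned under "Rinse, Repeat!" above - a sketch only, assuming the switch to SortedSet<string>, and relying on the fact that the fixed-length numeric ids sort the same way lexicographically as numerically:

        // Sketch: with a SortedSet<string> of fixed-length ids, the smallest and
        // largest items are available directly, so no LINQ scan over the whole set.
        public static Tuple<int, int> GetHighLow(SortedSet<string> ids)
        {
            // SortedSet<T> exposes Min and Max as properties.
            return Tuple.Create(Convert.ToInt32(ids.Max), Convert.ToInt32(ids.Min));
        }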
    Summary

    If you like the other ANTS tools, you'll like the ANTS Performance Profiler as well. It is extremely easy to use, with very little product knowledge required to get up and running. There are profilers built into the higher product lines of Visual Studio, of course, which are also powerful and easy to use. But for quickly jumping in and finding hot spots rapidly, Red Gate's Performance Profiler 7.4 is an excellent choice.

    Technorati Tags: Influencers, ANTS, Performance Profiler, Profiler

    Read the article

  • MS Access 2003: Can data disappear from records and how do I test for this and prevent it?

    - by user328960
    Problem and about the database: Data from a record in an Access 2003 database has disappeared. This database has one back end and three front ends, multiple users, and is hosted on Citrix. Within this database we have records of all clients served, numbering in the thousands.

    Background info: The form for client data entry is set up with various subforms, including both a "programs enrolled" subform and a "services" subform. A client can be enrolled in multiple programs. Once enrolled in a program, services can be entered for that program area using the services subform. There are multiple fields in the services subform, one of which is a drop-down field allowing you to choose from the programs a client has been enrolled in (the list is updated for that client whenever he is enrolled in a new program).

    The problem details: For one specific record and one specific program area, the program has disappeared from the "programs enrolled" subform and all of the related services have disappeared from the "services" subform for a period of three months of data entry. However, other programs and services for this record did not disappear.

    Questions: Is the disappearance of data a common Access 2003 problem? Are there tests that can be run to see if data is disappearing and to catch that data? If so, what are they? If there is specific code involved, what is it? What can be done to prevent data from disappearing (other than using a different database)?

    Read the article

  • Handle Arbitrary Exception, Print Default Exception Message

    - by inspectorG4dget
    I have a program, part of which executes a loop. During the execution of this loop, exceptions occur. Obviously I would like my program to run without errors, but for the sake of progress I would like it to execute over the entire input rather than stop when an exception is thrown. The easiest way to do this would be to implement an except block. However, when I do this, it catches all exceptions and continues with the program, and I never get to see the exception message (which I need in order to debug). Is there a way to catch any arbitrary exception and still be able to print out the exception message in the except block?
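    A minimal sketch of the usual pattern (names like process and inputs are placeholders, not from the question): catch Exception, print or log the message and traceback, and keep looping.

        import traceback

        for item in inputs:                      # 'inputs' stands in for the loop's data
            try:
                process(item)                    # whatever work the loop body does
            except Exception as exc:             # catches any ordinary exception
                print("Failed on %r: %s" % (item, exc))   # the exception message
                traceback.print_exc()            # full traceback, useful for debugging
                continue                         # keep going over the rest of the input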

    Read the article

  • Reserve RAM in C

    - by petersmith221
    Hi, I need ideas on how to write a C program that reserves a specified amount of RAM (in MB) until a key (e.g. any key) is pressed, on a Linux 2.6 32-bit system.

        ./eat_ram.out 200
        # If free -m is executed at this time, it should report 200 MB more in the used section than before running the program.
        [Any key is pressed]
        # Now all the reserved RAM should be released and the program exits.

    It is the core functionality of the program (reserving the RAM) that I do not know how to do; getting arguments from the command line, printing "[Any key is pressed]" and so on is not a problem for me. Any ideas on how to do this?
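    A minimal C sketch (not from the question): allocate the requested number of megabytes and touch every page so the memory is actually committed and shows up in free -m, then release it when a key is pressed. Touching the pages is the detail that is easy to miss.

        /* eat_ram.c - sketch: hold N MB of RAM until Enter is pressed.
           The memory must be written to (touched); otherwise the kernel only
           reserves address space and "free -m" will not show it as used. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(int argc, char *argv[])
        {
            if (argc < 2) {
                fprintf(stderr, "usage: %s <megabytes>\n", argv[0]);
                return 1;
            }

            size_t mb = (size_t)atoi(argv[1]);
            size_t bytes = mb * 1024 * 1024;

            char *block = malloc(bytes);
            if (block == NULL) {
                perror("malloc");
                return 1;
            }

            memset(block, 1, bytes);   /* touch every page so it is really allocated */

            printf("Holding %lu MB. Press Enter to release...\n", (unsigned long)mb);
            getchar();

            free(block);
            return 0;
        }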

    Read the article

  • Brute force characters into a textbox in c#

    - by Fred Dunly
    Hey everyone, I am VERY new to programming and the only language I know is C#, so I will have to stick with that. I want to make a program that "tests passwords" to see how long they would take to break with a basic brute-force attack. What I did was make two text boxes (textBox1 and textBox2) and write the program so that if the two text boxes had matching input, a "correct password" label would appear. But I want to write the program so that textBox2 will run a brute-force algorithm in it, and when it comes across the correct password, it will stop. I REALLY need help, and if you could just post my attached code with the correct additions in it, that would be great. The program so far is extremely simple, but I am very new to this. Thanks in advance.

        private void textBox2_TextChanged(object sender, EventArgs e)
        {
        }

        private void button1_Click(object sender, EventArgs e)
        {
            if (textBox2.Text == textBox1.Text)
            {
                label1.Text = "Password Correct";
            }
            else
            {
                label1.Text = "Password Wrong";
            }
        }

        private void label1_Click(object sender, EventArgs e)
        {
        }
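    For reference, a minimal sketch (not the asker's code) of a brute-force loop over a fixed character set: it tries candidates of increasing length and stops when one matches the target string. Hooking it up to the text boxes and keeping the UI responsive are left out.

        // Sketch: enumerate candidate strings over a small alphabet until one
        // equals 'target'. Exponential in length, so keep maxLength very small.
        static string BruteForce(string target, int maxLength)
        {
            const string alphabet = "abcdefghijklmnopqrstuvwxyz0123456789";

            for (int length = 1; length <= maxLength; length++)
            {
                var indices = new int[length];          // current "odometer" position
                while (true)
                {
                    var candidate = new char[length];
                    for (int i = 0; i < length; i++)
                        candidate[i] = alphabet[indices[i]];

                    string attempt = new string(candidate);
                    if (attempt == target)
                        return attempt;                 // found it - stop here

                    // advance the odometer; when it wraps completely, try the next length
                    int pos = length - 1;
                    while (pos >= 0 && ++indices[pos] == alphabet.Length)
                        indices[pos--] = 0;
                    if (pos < 0) break;
                }
            }
            return null;                                // not found within maxLength
        }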

    Read the article

  • The Incremental Architect's Napkin – #3 – Make Evolvability inevitable

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/04/the-incremental-architectacutes-napkin-ndash-3-ndash-make-evolvability-inevitable.aspx

    The easier something is to measure, the more likely it will be produced. Deviations between what is and what should be can be readily detected. That's what automated acceptance tests are for. That's what sprint reviews in Scrum are for. It's no small wonder our software looks like it looks. It has all the traits whose conformance with requirements can easily be measured. And it's lacking traits which cannot easily be measured. Evolvability (or Changeability) is such a trait. If an operation is correct, if an operation is fast enough, that can be checked very easily. But whether Evolvability is high or low, that cannot be checked by taking a measure or two. Evolvability might correlate with certain traits, e.g. number of lines of code (LOC) per function, Cyclomatic Complexity, or test coverage. But there is no threshold value signalling "evolvability too low"; also, Evolvability is hardly tangible for the customer.

    Nevertheless Evolvability is of great importance - at least in the long run. You can get away without much of it for a short time. Eventually, though, it's needed like any other requirement. Or even more, because without Evolvability no other requirement can be implemented. Evolvability is the foundation on which all else is built. Such fundamental importance is in stark contrast with its immeasurability. To compensate for this, Evolvability must be put at the very center of software development. It must become the hub around which everything else revolves.

    Since we cannot measure Evolvability, though, we cannot start watching it more. Instead we need to establish practices to keep it high (enough) at all times. Chefs have known that for a long time. That's why everybody in a restaurant kitchen is constantly seeing to cleanliness. Hygiene is important, as is having clean tools at standardized locations. Only then can the health of the patrons be guaranteed and production efficiency kept constantly high. Still, a kitchen's level of cleanliness is easier to measure than software Evolvability. That's why important practices like reviews, pair programming, or TDD are not enough, I guess. What we need to keep Evolvability in focus and high is... to continually evolve. Change must not be something to avoid but to embrace. To me that means the whole change cycle from requirement analysis to delivery needs to be gone through more often. Scrum's sprints of 4, 2, even 1 week are too long. Kanban's flow of user stories across the board is too unreliable; it takes as long as it takes. Instead we should fix the cycle time at 2 days max. I call that Spinning. No increment must take longer than from this morning until tomorrow evening to finish. Then it should be acceptance checked by the customer (or his/her representative, e.g. a Product Owner). For me there are several reasons for such a fixed and short cycle time for each increment:

    Clear expectations

    Absolute estimates ("This will take X days to complete.") are near impossible in software development, as explained previously. Too much unplanned research and engineering work lurks in every feature. And then there are pervasive interruptions of work by peers and management. However, the smaller the scope, the better our absolute estimates become. That's because we understand better what the requirements really are and what the solution should look like. But maybe more importantly, the shorter the timespan, the more we can control how we use our time. So much can happen over the course of a week and longer timespans. But if push comes to shove, I can block out all distractions and interruptions for a day or possibly two. That's why I believe we can give rough absolute estimates on 3 levels:

    - Noon
    - Tonight
    - Tomorrow

    Think of a meeting with a Product Owner at 8:30 in the morning. If she asks you how long it will take you to implement a user story or bug fix, you can say, "It'll be fixed by noon.", or you can say, "I can manage to implement it until tonight before I leave.", or you can say, "You'll get it by tomorrow night at the latest." Yes, I believe all else would be naive. If you're not confident you can get something done by tomorrow night (some 34 hours from now), you just cannot reliably commit to any timeframe. That means you should not promise anything; you should not even start working on the issue. So when estimating, use these four categories: Noon, Tonight, Tomorrow, NoClue - with NoClue meaning the requirement needs to be broken down further so each aspect can be assigned to one of the first three categories. If you like absolute estimates, here you go. But don't do deep estimates. Don't estimate dozens of issues; don't think ahead ("Issue A is a Tonight, then B will be a Tomorrow, after that it's C as a Noon, finally D is a Tonight - that's what I'll do this week."). Just estimate so Work-in-Progress (WIP) is 1 for everybody - plus a small number of buffer issues. To be blunt: yes, this makes promises impossible as to what a team will deliver in terms of scope at a certain date in the future. But it will give a Product Owner a clear picture of what to pull for acceptance feedback tonight and tomorrow.

    Trust through reliability

    Our trade is lacking trust. Customers don't trust software companies/departments much. Managers don't trust developers much. I find that perfectly understandable in the light of what we're trying to accomplish: delivering software in the face of uncertainty by the means of material goods production. Customers as well as managers still expect software development to be close to the production of houses or cars. But that's a fundamental misunderstanding. Software development is development. It's basically research. As software developers we're constantly executing experiments to find out what really provides value to users. We don't know what they need, we just have mediated hypotheses. That's why we cannot reliably deliver on preposterous demands. So trust is out of the window in no time. If we switch to delivering in short cycles, though, we can regain trust, because estimates - explicit or implicit - of up to 32 hours at most can be satisfied. I'd say: reliability over scope. It's more important to reliably deliver what was promised than to cover a lot of requirement area. So when in doubt promise less - but deliver without delay. Deliver on scope (Functionality and Quality); but also deliver on Evolvability, i.e. on inner quality according to accepted principles. Always. Trust will be the reward. Less complexity of communication will follow. More goodwill buffer will follow. So don't wait for some Kanban board to show you that flow can be improved by scheduling smaller stories. You don't need to learn that the hard way. Just start with small batches in the three sizes described above.

    Fast feedback

    What has been finished can be checked for acceptance. Why wait for a sprint of several weeks to end? Why let the mental model of the issue and its solution dissipate? If you get final feedback after one or two weeks, you hardly remember what you did and why you did it. Reasoning becomes hard. But more importantly, you probably are not in the mood anymore to go back to something you deemed done a long time ago. It's boring, it's frustrating to open up that mental box again. Learning is harder the longer it takes from event to feedback. Effort can be wasted between event (finishing an issue) and feedback, because other work might go in the wrong direction based on false premises. Checking finished issues for acceptance is the most important task of a Product Owner. It's even more important than planning new issues, because as long as work started is not released (accepted) it is potential waste. So before starting new work, better make sure work already done has value. By putting the emphasis on acceptance rather than planning, true pull is established. As long as planning and starting work is more important, it's a push process. Accept a Noon issue on the same day before leaving. Accept a Tonight issue before leaving today or first thing tomorrow morning. Accept a Tomorrow issue tomorrow night before leaving or early the day after tomorrow. After acceptance the developer(s) can start working on the next issue.

    Flexibility

    As if reliability/trust and fast feedback for less waste weren't enough economic incentive, there is flexibility. After each issue the Product Owner can change course. If on Monday morning feature slices A, B, C, D, E were important and A, B, C were scheduled for acceptance by Monday evening and Tuesday evening, the Product Owner can change her mind at any time. Maybe after A got accepted she asks for continuation with D. But maybe, just maybe, she has gotten a completely different idea by then. Maybe she wants work to continue on F. And after B it's neither D nor E, but G. And after G it's D. With Spinning, priorities can be changed every 32 hours at the latest. And nothing is lost, because what got accepted is of value. It provides an incremental value to the customer/user. Or it provides internal value to the Product Owner as increased knowledge/decreased uncertainty. I find such reactivity over commitment economically very beneficial. Why commit a team to some workload for several weeks? It's unnecessary at best, and inflexible and wasteful at worst. If we cannot promise delivery of a certain scope on a certain date - which is what customers/management usually want - we can at least provide them with unprecedented flexibility in the face of high uncertainty. Where the path is not clear, cannot be clear, make small steps so you're able to change your course at any time.

    Premature completion

    Customers/management are used to premeditated budgets. They want to know exactly how much to pay for a certain amount of requirements. That's understandable. But it does not match the nature of software development. We should know that by now. Maybe there's somewhere in the world some team who can consistently deliver on scope, quality, time, and budget. Great! Congratulations! I, however, haven't seen such a team yet. Which does not mean it's impossible, but I think it's nothing I can recommend to strive for. Rather I'd say: don't try this at home. It might hurt you one way or the other. However, what we can do is allow customers/management to stop work on features at any moment. With Spinning, every 32 hours a feature can be declared finished - even though it might not be completed according to the initial definition. I think progress over completion is an important offer software development can make. Why think in terms of completion beyond a promise for the next 32 hours? Isn't it more important to constantly move forward? Step by step. We're not running sprints, we're not running marathons, not even ultra-marathons. We're in the sport of running forever. That makes it futile to stare at the finishing line. The very concept of a burn-down chart is misleading (in most cases). Whoever can only think in terms of completed requirements shuts out the chance for saving money. The requirements for a feature are mostly uncertain. So how does a Product Owner know in the first place how much is needed? Maybe more than specified is needed - which gets uncovered step by step with each finished increment. Maybe less than specified is needed. After each 4–32 hour increment the Product Owner can do an experiment (or invite users to an experiment) to see whether a particular trait of the software system is already good enough. And if so, she can switch her attention to a different aspect. In the end, requirements A, B, C then could be finished just 70%, 80%, and 50%. What the heck? It's good enough - for now. 33% money saved. Wouldn't that be splendid? Isn't that a stunning argument for any budget-sensitive customer? You can save money and still get what you need.

    Pull on practices

    So far, in addition to more trust, more flexibility, and less money spent, Spinning has led to "doing less", which also means less code, which of course means higher Evolvability per se. Last but not least, though, I think Spinning's short acceptance cycles have one more effect. They exert pull-power on all sorts of practices known for increasing Evolvability. If, for example, you believe high automated test coverage helps Evolvability by lowering the fear of inadvertent damage to a code base, why isn't 90% of the developer community practicing automated tests consistently? I think the answer is simple: because they can do without. Somehow they manage to do enough manual checks before their rare releases/acceptance checks to ensure good enough correctness - at least in the short term. The same goes for other practices like component orientation, continuous build/integration, code reviews etc. None of that is compelling, urgent, imperative. Something else always seems more important. So Evolvability principles and practices fall through the cracks most of the time - until a project hits a wall. Then everybody becomes desperate; but by then (re)gaining Evolvability has become a very, very difficult and tedious undertaking. Sometimes up to the point where the existence of a project/company is in danger. With Spinning that's different. If you're practicing Spinning you cannot avoid all those practices. With Spinning you very quickly realize you cannot deliver reliably even on your 32-hour promises. Spinning thus pulls on developers to adopt principles and practices for Evolvability. They will start actively looking for ways to keep their delivery rate high. And if not, management will soon tell them to do that, because first the Product Owner and then management will notice an increasing difficulty in delivering value within 32 hours. There, finally, there emerges a way to measure Evolvability: the more frequently developers tell the Product Owner there is no way to deliver anything worthy of feedback until tomorrow night, the poorer Evolvability is. Don't count the "WTF!"s, count the "No way!" utterances.

    In closing

    For sustainable software development we need to put Evolvability first. Functionality and Quality must not rule software development but be implemented within a framework ensuring (enough) Evolvability. Since Evolvability cannot be measured easily, I think we need to put software development "under pressure". Software needs to be changed more often, in smaller increments, each increment being relevant to the customer/user in some way. That does not mean each increment is worthy of shipment. It's sufficient to gain further insight from it. Increments primarily serve the reduction of uncertainty, not sales. Sales even needs to be decoupled from this incremental progress. No more promises to sales. No more delivery au point. Rather, sales should look at a stream of accepted increments (or incremental releases) and scoop from that whatever they find valuable. Sales and marketing need to realize they should work on what's there, not what might be possible in the future. But I digress... In my view a Spinning cycle - which is not easy to reach, which requires practice - is the core practice to compensate for the immeasurability of Evolvability. From start to finish of each issue in 32 hours max - that's the challenge we need to accept if we're serious about increasing Evolvability. Fortunately, higher Evolvability is not the only outcome of Spinning. Customers/management will like the increased flexibility and "getting more bang for the buck".

    Read the article

  • Installing Python Script, Maintaining Reference to Python 2.6

    - by zfranciscus
    Hi, I am writing a Python program that relies on version 2.6. I went through the distribution documentation (http://docs.python.org/distutils/index.html), and what I have figured out so far is that I basically need to write a setup.py script, something like:

        setup(name='Distutils',
              version='1.0',
              description='Python Distribution Utilities',
              author='My Name',
              author_email='My Email',
              url='some URL',
              package_dir={'': 'src'},
              packages=[''],
             )

    I would like to ensure that my program uses the 2.6 interpreter and library. What would be the best approach to ensure that my program uses 2.6? Should I distribute the Python 2.6 library along with my program? Is there an alternative approach?
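    As far as I know, plain distutils does not enforce an interpreter version, so one simple and commonly used guard (a sketch, not from the cited documentation) is to check sys.version_info at the top of setup.py and abort if the running interpreter is not 2.6:

        # At the top of setup.py: refuse to install under anything other than Python 2.6.
        import sys

        if sys.version_info[:2] != (2, 6):
            sys.exit("This program requires Python 2.6; you are running %d.%d."
                     % sys.version_info[:2])

        from distutils.core import setup

        setup(name='Distutils',
              version='1.0')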

    Read the article

  • running VisualStudio CF3.5 apps on WINCE

    - by marc
    Hi, I am trying to write a program for a Chinese PNA device running Windows CE 5.0. But every time I write a simple program in VS2008 with C# and deploy it to my device, it just doesn't run. First it complains about disposing an object called "menu", although I don't want a menu - VS2008 just creates one for me. If I delete the menu from the form, the program throws an exception. I installed a program called MIOPocket on the PNA, which has PowerToys, some games and MS Media Player. It also creates a ".NET Framework 3.5" directory, so I know 3.5 is installed and must be working. But I think I am missing something. I am also not sure what to choose as the target device: Windows Mobile or Windows CE. If I run the .exe under Windows 7 it works, but under Windows CE it's a no-go. Does anyone have a clue what is going wrong?

    Read the article

  • PerformanceCounters on .NET 4.0 & Windows 7

    - by scott
    I have a program that works fine on VS2008 and Vista, but I'm trying it on Windows 7 and VS2010 / .NET Framework 4.0 and it's not working. Ultimately the problem is that System.Diagnostics.PerformanceCounterCategory.GetCategories() (and other PerformanceCounterCategory methods) is not working. I'm getting a System.InvalidOperationException with the message "Cannot load Counter Name data because an invalid index '' was read from the registry." I can reproduce this with the very simple program shown below:

        class Program
        {
            static void Main(string[] args)
            {
                foreach (var pc in System.Diagnostics.PerformanceCounterCategory.GetCategories())
                {
                    Console.WriteLine(pc.CategoryName);
                }
            }
        }

    I did make sure I'm running the program as an admin. It doesn't matter if I run it with VS/the debugger attached or not. I don't have another machine with Windows 7 or VS2010 to test it on, so I'm not sure which of the two is complicating things here (or both?). It is Windows 7 x64 and I've tried forcing the app to run as both x86 and x64, but I get the same results.

    Read the article

  • Python Ctypes Read/WriteProcessMemory() - Error 5/998 Help!

    - by user299805
    Please don't be put off by the length of the following code; if you are familiar with ctypes or C it should be easy to read. I have been trying for a long time to get my ReadProcessMemory() and WriteProcessMemory() calls working and have tried almost every possibility but the right one. The program launches the target process and returns its PID and handle just fine, but I always get error code 5 (ERROR_ACCESS_DENIED) when I run the read function (forget the write for now). I am launching the target as what I believe to be a child process, with PROCESS_ALL_ACCESS or CREATE_PRESERVE_CODE_AUTHZ_LEVEL. I have also tried PROCESS_ALL_ACCESS and PROCESS_VM_READ when I open the handle. I can also say that it is a valid memory location, because I can find it in the running program with Cheat Engine. For VirtualQuery() I get error code 998 (ERROR_NOACCESS), which further confirms my suspicion that it is some security/privilege problem. Any help or ideas would be much appreciated - again, it's my whole program so far, don't let it scare you =P.

        from ctypes import *
        from ctypes.wintypes import BOOL
        import binascii

        BYTE = c_ubyte
        WORD = c_ushort
        DWORD = c_ulong
        LPBYTE = POINTER(c_ubyte)
        LPTSTR = POINTER(c_char)
        HANDLE = c_void_p
        PVOID = c_void_p
        LPVOID = c_void_p
        UNIT_PTR = c_ulong
        SIZE_T = c_ulong

        class STARTUPINFO(Structure):
            _fields_ = [("cb", DWORD),
                        ("lpReserved", LPTSTR),
                        ("lpDesktop", LPTSTR),
                        ("lpTitle", LPTSTR),
                        ("dwX", DWORD),
                        ("dwY", DWORD),
                        ("dwXSize", DWORD),
                        ("dwYSize", DWORD),
                        ("dwXCountChars", DWORD),
                        ("dwYCountChars", DWORD),
                        ("dwFillAttribute", DWORD),
                        ("dwFlags", DWORD),
                        ("wShowWindow", WORD),
                        ("cbReserved2", WORD),
                        ("lpReserved2", LPBYTE),
                        ("hStdInput", HANDLE),
                        ("hStdOutput", HANDLE),
                        ("hStdError", HANDLE),]

        class PROCESS_INFORMATION(Structure):
            _fields_ = [("hProcess", HANDLE),
                        ("hThread", HANDLE),
                        ("dwProcessId", DWORD),
                        ("dwThreadId", DWORD),]

        class MEMORY_BASIC_INFORMATION(Structure):
            _fields_ = [("BaseAddress", PVOID),
                        ("AllocationBase", PVOID),
                        ("AllocationProtect", DWORD),
                        ("RegionSize", SIZE_T),
                        ("State", DWORD),
                        ("Protect", DWORD),
                        ("Type", DWORD),]

        class SECURITY_ATTRIBUTES(Structure):
            _fields_ = [("Length", DWORD),
                        ("SecDescriptor", LPVOID),
                        ("InheritHandle", BOOL)]

        class Main():
            def __init__(self):
                self.h_process = None
                self.pid = None

            def launch(self, path_to_exe):
                CREATE_NEW_CONSOLE = 0x00000010
                CREATE_PRESERVE_CODE_AUTHZ_LEVEL = 0x02000000
                startupinfo = STARTUPINFO()
                process_information = PROCESS_INFORMATION()
                security_attributes = SECURITY_ATTRIBUTES()
                startupinfo.dwFlags = 0x1
                startupinfo.wShowWindow = 0x0
                startupinfo.cb = sizeof(startupinfo)
                security_attributes.Length = sizeof(security_attributes)
                security_attributes.SecDescriptior = None
                security_attributes.InheritHandle = True
                if windll.kernel32.CreateProcessA(path_to_exe, None, byref(security_attributes), byref(security_attributes), True, CREATE_PRESERVE_CODE_AUTHZ_LEVEL, None, None, byref(startupinfo), byref(process_information)):
                    self.pid = process_information.dwProcessId
                    print "Success: CreateProcess - ", path_to_exe
                else:
                    print "Failed: Create Process - Error code: ", windll.kernel32.GetLastError()

            def get_handle(self, pid):
                PROCESS_ALL_ACCESS = 0x001F0FFF
                PROCESS_VM_READ = 0x0010
                self.h_process = windll.kernel32.OpenProcess(PROCESS_VM_READ, False, pid)
                if self.h_process:
                    print "Success: Got Handle - PID:", self.pid
                else:
                    print "Failed: Get Handle - Error code: ", windll.kernel32.GetLastError()
                windll.kernel32.SetLastError(10000)

            def read_memory(self, address):
                buffer = c_char_p("The data goes here")
                bufferSize = len(buffer.value)
                bytesRead = c_ulong(0)
                if windll.kernel32.ReadProcessMemory(self.h_process, address, buffer, bufferSize, byref(bytesRead)):
                    print "Success: Read Memory - ", buffer.value
                else:
                    print "Failed: Read Memory - Error Code: ", windll.kernel32.GetLastError()
                windll.kernel32.CloseHandle(self.h_process)
                windll.kernel32.SetLastError(10000)

            def write_memory(self, address, data):
                count = c_ulong(0)
                length = len(data)
                c_data = c_char_p(data[count.value:])
                null = c_int(0)
                if not windll.kernel32.WriteProcessMemory(self.h_process, address, c_data, length, byref(count)):
                    print "Failed: Write Memory - Error Code: ", windll.kernel32.GetLastError()
                    windll.kernel32.SetLastError(10000)
                else:
                    return False

            def virtual_query(self, address):
                basic_memory_info = MEMORY_BASIC_INFORMATION()
                windll.kernel32.SetLastError(10000)
                result = windll.kernel32.VirtualQuery(address, byref(basic_memory_info), byref(basic_memory_info))
                if result:
                    return True
                else:
                    print "Failed: Virtual Query - Error Code: ", windll.kernel32.GetLastError()

        main = Main()
        address = None
        main.launch("C:\Program Files\ProgramFolder\Program.exe")
        main.get_handle(main.pid)
        #main.write_memory(address, "\x61")
        while 1:
            print '1 to enter an address'
            print '2 to virtual query address'
            print '3 to read address'
            choice = raw_input('Choice: ')
            if choice == '1':
                address = raw_input('Enter and address: ')
            if choice == '2':
                main.virtual_query(address)
            if choice == '3':
                main.read_memory(address)

    Thanks!

    Read the article

  • How do I guarantee cleanup code runs in Windows C++ (SIGINT, bad alloc, and closed window)

    - by Meekohi
    I have a Windows C++ console program, and if I don't call ReleaseDriver() at the end of my program, some pieces of hardware enter a bad state and can't be used again without rebooting. I'd like to make sure ReleaseDriver() gets run even if the program exits abnormally, for example if I hit Ctrl+C or close the console window. I can use signal() to create a signal handler for SIGINT. This works fine, although as the program ends it pops up an annoying error, "An unhandled Win32 exception occurred...". I don't know how to handle the case of the console window being closed, and (more importantly) I don't know how to handle exceptions caused by bad memory accesses etc. Thanks for any help!
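    One common Windows-specific combination is sketched below (a sketch, not a complete answer: it will not survive a hard kill from Task Manager, and a crash from a bad memory access would additionally need SetUnhandledExceptionFilter or structured exception handling). It registers the cleanup with atexit for normal exits, a console control handler for Ctrl+C and the close button, and a terminate handler for uncaught C++ exceptions. ReleaseDriver() should be made safe to call more than once.

        #include <windows.h>
        #include <cstdlib>
        #include <exception>

        void ReleaseDriver();   // provided elsewhere by the hardware library

        void OnTerminate()      // uncaught C++ exceptions end up here
        {
            ReleaseDriver();
            std::abort();
        }

        BOOL WINAPI ConsoleHandler(DWORD ctrlType)
        {
            if (ctrlType == CTRL_C_EVENT || ctrlType == CTRL_BREAK_EVENT ||
                ctrlType == CTRL_CLOSE_EVENT)
            {
                ReleaseDriver();   // console is going away; clean up now
                ExitProcess(0);    // exit without the default error dialog
            }
            return FALSE;          // let default handling run for anything else
        }

        int main()
        {
            std::atexit(ReleaseDriver);                  // normal return or exit()
            std::set_terminate(OnTerminate);             // uncaught throws
            SetConsoleCtrlHandler(ConsoleHandler, TRUE); // Ctrl+C / console close

            // ... program body ...
            return 0;
        }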

    Read the article

  • PartCover 2.5.3 win 7 x64

    - by user329814
    Could you tell me how you got PartCover running with VS2008 and Windows 7 x64? Based on this post, http://stackoverflow.com/questions/256287/how-do-i-run-partcover-in-x64-windows, I ran:

        c:\Program Files (x86)\Gubka Bob\PartCover .NET 2.3>CorFlags.exe PartCover.exe /32BIT+ /Force

    with the result:

        Microsoft (R) .NET Framework CorFlags Conversion Tool. Version 3.5.21022.8
        Copyright (c) Microsoft Corporation. All rights reserved.
        corflags : warning CF011 : The specified file is strong name signed. Using /Force will invalidate the signature of this image and will require the assembly to be resigned.

    and:

        c:\Program Files (x86)\NUnit 2.5.2\bin\net-2.0>CorFlags.exe nunit.exe /32BIT+ /Force

    with the result:

        Microsoft (R) .NET Framework CorFlags Conversion Tool. Version 3.5.21022.8
        Copyright (c) Microsoft Corporation. All rights reserved.

    Also, based on my discussion http://stackoverflow.com/questions/2546340/using-partcover-2-3-with-net-4-0-runtime/2964333#2964333, I tried using the x86 version of NUnit. What I'm trying to run coverage for is the C# money sample for NUnit 2.5.2. I get the same exception:

        System.Threading.ThreadInterruptedException --- System.Runtime.InteropServices.COMException (0x80040153): Retrieving the COM class factory for component with CLSID {FB20430E-CDC9-45D7-8453-272268002E08} failed due to the following error: 80040153

    Thank you.

    Edit: same thing with PartCover 2.2. My settings:

        exe file:    C:\Program Files (x86)\NUnit 2.5.2\bin\net-2.0\nunit-console-x86.exe
        working dir: c:\Program Files (x86)\NUnit 2.5.2\samples\csharp\money\
        work arg:    /config=c:\Program Files (x86)\NUnit 2.5.2\samples\csharp\money\cs-money.csproj
        rules:       +[]

    Read the article

  • Building Web Application project using MSBuild from command line on 64-bit: missing targets file

    - by James Allen
    Building a solution containing a web application project using MSBuild from PowerShell like this:

        msbuild "/p:OutDir=$build_dir\" $solution_file

    works fine for me on 32-bit, but on a 64-bit machine I am running into this error:

        error MSB4019: The imported project "C:\Program Files\MSBuild\Microsoft\VisualStudio\v9.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.

    I am using Visual Studio 2008 and PowerShell v2. The problem has already been documented here and here. Basically, on a 64-bit install of VS, the Microsoft.WebApplication.targets file needed by MSBuild is under the Program Files (x86) directory, not Program Files, but MSBuild doesn't recognise this and so looks in the wrong place. The two known workarounds are not ideal:

    1. Manually copy the file on 64-bit from Program Files (x86) to Program Files. This is a poor solution - every dev will have to do it by hand.

    2. Manually edit the csproj file so MSBuild looks in the right place. Again not ideal: I would rather not have to get everyone on 64-bit to manually edit csproj files on every new project, e.g.:

        <Import Project="$(MSBuildExtensionsPathx86)\$(WebAppTargetsSuffix)" Condition="Exists('$(MSBuildExtensionsPathx86)\$(WebAppTargetsSuffix)')" />

    Ideally I want a way to tell MSBuild to import the targets file from the right place from the command line, but I can't work out how to do that. Any solutions?

    Read the article

  • Add Command prompt in VS 2008 Express Edition manually

    - by Kumar
    Hi all, to add a command prompt to VS 2008 Express Edition, I did the following: Tools -> External Tools -> Add, then entered the following information:

        Title:             Visual Studio 2008 Command Prompt
        Command:           cmd.exe
        Arguments:         %comspec% /k ""C:\Program Files\Microsoft Visual Studio 9.0\VC\vcvarsall.bat"" x86
        Initial Directory: $(ProjectDir)

    Then OK/Apply. After this, when I go to the Tools menu and click on Visual Studio 2008 Command Prompt, a command prompt opens but I get the following error message:

        '"C:\Program Files\Microsoft Visual Studio 9.0\VC\vcvarsall.bat"' is not recognized as an internal or external command, operable program or batch file.
        C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE

    Could somebody please help me fix this problem, or walk me through from scratch how to add a command prompt to the Tools menu manually in VS 2008 Express Edition? Thanks, Kumar

    Read the article

  • Environment variable names with parentheses, like %ProgramFiles(x86)%, in PowerShell?

    - by jwfearn
    How does one get the value of an environment variable whose name contains parentheses in a PowerShell script? To complicate matters, some variable names contain parentheses while others have similar names without parentheses. For example (using cmd.exe):

        C:\>set | find "ProgramFiles"
        CommonProgramFiles=C:\Program Files\Common Files
        CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
        ProgramFiles=C:\Program Files
        ProgramFiles(x86)=C:\Program Files (x86)

    We see that %ProgramFiles% is not the same as %ProgramFiles(x86)%. My PowerShell code is failing in a weird way because it's ignoring the part of the environment variable name after the parentheses. Since this happens to match the name of a different, but existing, environment variable, I don't fail outright - I just get the right value of the wrong variable. Here's a test function in PowerShell to illustrate my problem:

        function Do-Test
        {
            $ok = "C:\Program Files (x86)"          # note space between 's' and '('
            $bad = "$Env:ProgramFiles" + "(x86)"    # uses %ProgramFiles%
            $bin32 = "$Env:ProgramFiles(x86)"       # LINE 6, I want to use %ProgramFiles(x86)%

            if ( $bin32 -eq $ok )
            {
                Write-Output "Pass"
            }
            elseif ( $bin32 -eq $bad )
            {
                Write-Output "Fail: %ProgramFiles% used instead of %ProgramFiles(x86)%"
            }
            else
            {
                Write-Output "Fail: some other reason"
            }
        }

    And here's the output:

        PS> Do-Test
        Fail: %ProgramFiles% used instead of %ProgramFiles(x86)%

    Is there a simple change I can make to line 6 above to get the correct value of %ProgramFiles(x86)%?

    NOTE: In the text of this post I am using batch file syntax for environment variables as a convenient shorthand. For example, %SOME_VARIABLE% means "the value of the environment variable whose name is SOME_VARIABLE". If I knew the properly escaped syntax in PowerShell, I wouldn't need to ask this question.
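    For reference, PowerShell can reference such names when the variable path is wrapped in braces - a small sketch (the brace syntax is the usual way to quote a provider path containing special characters):

        # Braces let the env: provider see the full name, parentheses included.
        $bin32 = ${env:ProgramFiles(x86)}                     # e.g. C:\Program Files (x86)

        # Inside a double-quoted string, wrap it in a subexpression:
        "32-bit programs live in $(${env:ProgramFiles(x86)})"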

    Read the article

  • PHP Exec command - How to pass input to a series of questions

    - by user556597
    I have a program on my Linux server that asks the same series of questions each time it executes and then prints several lines of output. My goal is to automate the input and output with a PHP script. I know how to capture the output in an array by writing:

        $out = array();
        exec("my/path/program", $out);

    But how do I handle the input? Assume the program asks three questions and the valid answers are:

        left
        120
        n

    What is the easiest way in PHP to pass that input to the program? Can I do it somehow on the exec line? I'm not a PHP noob, but I simply have never needed to do this before. Alas, my googling is going in circles.
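    One approach (a sketch; the program path and answers are the question's own examples) is proc_open(), which gives you a writable pipe for the program's stdin as well as a readable pipe for its output:

        <?php
        // Sketch: feed scripted answers to an interactive program via proc_open().
        $descriptors = array(
            0 => array("pipe", "r"),   // child's stdin  (we write to it)
            1 => array("pipe", "w"),   // child's stdout (we read from it)
            2 => array("pipe", "w"),   // child's stderr
        );

        $proc = proc_open("my/path/program", $descriptors, $pipes);
        if (is_resource($proc)) {
            // Answer the three questions in order, one line each.
            fwrite($pipes[0], "left\n120\nn\n");
            fclose($pipes[0]);

            $output = stream_get_contents($pipes[1]);   // everything the program printed
            fclose($pipes[1]);
            fclose($pipes[2]);
            proc_close($proc);

            $out = explode("\n", trim($output));        // same shape as exec()'s $out array
        }
        ?>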

    Read the article
