Search Results

Search found 22841 results on 914 pages for 'aspect orientated program'.

Page 800/914

  • Build a Visual Studio Project without access to referenced dlls

    - by David Reis
    I have a project with a set of binary dependencies (assembly DLLs for which I do not have the source code). At runtime those dependencies must be pre-installed on the machine, and at compile time they must be present in the source tree, e.g. in a lib folder. As I'm also making the source code for this program available, I would like to enable a simple download-and-build experience for it. Unfortunately I cannot redistribute the DLLs, and that complicates things, since VS won't build the project without access to the referenced DLLs. Is there any way to build and link this project in the absence of the real referenced DLLs? Maybe there's a way to tell VS to link against an auto-generated stub of the DLL, so that it can rebuild without the original? Maybe there's a third-party tool that will do this? Any clues or best practices in this area? I realize a person must have access to the DLLs to run the code, so it makes sense that they could add them to the build process; I'm just trying to save them the pain of collecting all the DLLs and placing them in the lib folder manually.

    Read the article

  • What is your favorite API developer community site? And why? [closed]

    - by whatupwilly
    There are a lot of great sites out there that offer good documentation, tools, tips, best practices, sample code, etc. for the APIs they are publishing. A sample:
      - http://apiwiki.twitter.com
      - http://developer.netflix.com/
      - http://developers.facebook.com/
      - https://affiliate-program.amazon.com/gp/advertising/api/detail/main.html
      - http://code.google.com/
      - http://remix.bestbuy.com/
      - http://www.flickr.com/services/api/misc.overview.html
      - http://products.wolframalpha.com/api/webserviceapi.html
    There are some no-brainers that I think a good developer site should have:
      - High-level introduction
      - Quick start guide
      - API-specific details, showing example requests and responses
      - Links to sample code and/or 3rd-party libraries
      - Developer registration (e.g. to get an API key)
      - Blog
    But what about some other things:
      - Online forum or message board vs. Google Group (or similar)
      - Galleries/showcases spotlighting great apps built on the API - who has done nice galleries?
      - Community wiki - how do people feel about letting the community have edit rights on API documentation pages?
      - Online testing tools (Facebook, for example, has a lot of nice interactive tools to simulate requests/responses)
    What are some packages that you would recommend to put this all together:
      - pbwiki
      - Google Group pages
      - MediaWiki
      - An API vendor package such as Sonoa Systems that offers a customizable developer portal
    So, to summarize:
      - What are some other great API developer portals out there?
      - What are some nice features you like on them?
      - Any recommendations on what to use to build these features out?
    Thanks, Will
    Zappos.com Public API (soon to launch) Product Manager

    Read the article

  • Duplicate Symbol Linker Error (C++ help)

    - by Vash265
    Hi. I'm learning some CSP (constraint satisfaction) theory right now, and am using this library to parse XML files. I'm using Xcode as an IDE. My program compiles fine, but when it goes to link the files, I get a duplicate symbol error involving the XMLParser_libxml2.hh file. My files are separated as follows:
      - A class header file that includes the XMLParser file above
      - A class implementation file that includes the class header file
      - A main file that includes the class header file
    The duplicate symbol is occurring in main.o and classfile.o, but as far as I can tell, I'm not actually including that .hh file twice. Full error:
        ld: duplicate symbol bool CSPXMLParser::UTF8String::to<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >(std::basic_string<char, std::char_traits<char>, std::allocator<char> >&) const
        in /Users/vash265/CSP/Untitled/build/Untitled.build/Debug/Untitled.build/Objects-normal/x86_64/dStructFill.o
        and /Users/vash265/CSP/Untitled/build/Untitled.build/Debug/Untitled.build/Objects-normal/x86_64/main.o
    Copying the implementation of the class into the main file and taking the class implementation file out of the compilation target alleviates the error, but it's a disorganized mess this way, and I'll be adding more classes very soon (and it would be nice to have them in separate files). As I've come to understand it, this is caused by the file (XMLParser_libxml2.hh) having both the class and function definitions and implementations in one file (and it seems as though this might have been necessary due to the use of templates in that 'header' file). Any ideas on how to get around sticking all my class files in my main.cpp? (I've tried ifdefs; they don't work.)
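    A pattern that commonly produces exactly this linker error is a fully specialized function template (or any ordinary function) defined in a header without the inline keyword: every translation unit including the header then emits its own copy of the symbol. The sketch below is a generic illustration of that situation and its fix, not the actual contents of XMLParser_libxml2.hh:
        // utf8string.hh -- illustrative only, not the real library header
        #include <string>

        struct UTF8String {
            const char* data;

            // The primary template may live in the header: template definitions
            // get "vague" linkage, so they do not cause duplicate symbols.
            template <typename T>
            bool to(T& out) const { out = T(); return false; }
        };

        // A full specialization is an ordinary function.  Without 'inline',
        // every .cpp that includes this header gets its own definition of
        // the symbol, which the linker reports as a duplicate.
        template <>
        inline bool UTF8String::to<std::string>(std::string& out) const {
            out = data ? data : "";
            return data != nullptr;
        }
    If the header cannot be edited, the usual workaround is to include it from exactly one translation unit and expose thin wrapper declarations of your own to the rest of the project.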

    Read the article

  • CGContextDrawImage returning bad access

    - by Marcelo
    Hello guys, I've been trying to blend two UIImages for about 2 days now and I've been getting some BAD_ACCESS errors. First of all, I have two images that have the same orientation; basically I'm using Core Graphics to do the blending. One curious detail: every time I modify the code, the first time I compile and run it on the device, I get to do everything I want without any trouble. Once I restart the application, I get the error and the program shuts down. Can anyone shed some light on this? I tried accessing the baseImage size dynamically, but that gives me a bad access too. Here's a snippet of how I'm doing the blending.
        UIGraphicsBeginImageContext(CGSizeMake(320, 480));
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextTranslateCTM(context, 0, 480);
        CGContextScaleCTM(context, 1.0, -1.0);
        CGContextDrawImage(context, rect, [baseImage CGImage]);
        CGContextSetBlendMode(context, kCGBlendModeOverlay);
        CGContextDrawImage(context, rect, [tmpImage CGImage]);
        [transformationView setImage:UIGraphicsGetImageFromCurrentImageContext()];
        UIGraphicsEndImageContext();

    Read the article

  • XmlDocument.Load() throws XmlSchemaValidationException

    - by Praetorian
    Hi, I'm trying to validate an XML document against a schema (which is embedded in my program as a resource). I got everything working, so I tried to test the error handling by adding a second sibling node in the XML at a location where the schema specifies maxOccurs="1". The problem is that my ValidationEventHandler is never called; instead, XmlDocument.Load() throws an XmlSchemaValidationException when I'd expected XmlDocument.Validate() to report the problem. This is the code I have:
        private void ValidateUserData( string xmlPath )
        {
            var resInfo = Application.GetResourceStream( new Uri( @"MySchema.xsd", UriKind.Relative ) );
            var schema = XmlSchema.Read( resInfo.Stream, SchemaValidationCallBack );

            XmlSchemaSet schemaSet = new XmlSchemaSet();
            schemaSet.Add( schema );
            schemaSet.ValidationEventHandler += SchemaValidationCallBack;

            XmlReaderSettings settings = new XmlReaderSettings();
            settings.Schemas = schemaSet;
            settings.ValidationType = ValidationType.Schema;

            XmlDocument doc = new XmlDocument();
            using( XmlReader reader = XmlReader.Create( xmlPath, settings ) )
            {
                doc.Load( reader ); // <-- This line throws an exception if the XML is invalid
                reader.Close();
            }
            doc.Validate( SchemaValidationCallBack ); // <-- This is never reached
        }

        private void SchemaValidationCallBack( object sender, ValidationEventArgs e )
        {
            Console.WriteLine( "SchemaValidationCallBack: " + e.Message );
        }
    How do I get the callback to be called so I can handle validation errors? Thanks for your help!

    Read the article

  • Linux raw socket programming question

    - by user194420
    Hi all, I am trying to create a raw socket which sends and receives messages with IP/TCP headers under Linux. I can successfully bind to a port and receive a TCP message (i.e. a SYN). However, the message seems to be handled by the OS, not by my program; I am just a reader of it (like Wireshark). My raw socket binds to port 8888, and then I try to telnet to that port. In Wireshark, it shows that port 8888 replies with an RST/ACK when it receives the SYN request. In my program, it shows that a new message was received, and it does not reply with any message. Is there any way to actually bind to that port (and prevent the OS from handling it)? Here is part of my code; I cut out the error checking for easier reading:
        sockfd = socket(AF_INET, SOCK_RAW, IPPROTO_TCP);

        int tmp = 1;
        const int *val = &tmp;
        setsockopt(sockfd, IPPROTO_IP, IP_HDRINCL, val, sizeof(tmp));

        servaddr.sin_family = AF_INET;
        servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
        servaddr.sin_port = htons(8888);
        bind(sockfd, (struct sockaddr*)&servaddr, sizeof(servaddr));

        // call recv in a loop
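    For context, a raw IPPROTO_TCP socket is not really "bound to a port": it receives a copy of every TCP segment, and the kernel's own TCP stack still answers the SYN (with the RST/ACK seen in Wireshark) because nothing is listening on 8888. The commonly cited workaround is a firewall rule that drops the kernel's outbound resets for that port, something along the lines of iptables -A OUTPUT -p tcp --sport 8888 --tcp-flags RST RST -j DROP (treat that as an assumption to verify on your distribution). The sketch below shows the filtering a raw-socket program has to do itself, i.e. parsing the IP/TCP headers of each received segment and ignoring traffic for other ports:
        // Minimal sketch, assuming Linux headers: filter raw TCP segments by
        // destination port, since bind() on a raw socket does not do this.
        #include <netinet/ip.h>
        #include <netinet/tcp.h>
        #include <sys/socket.h>
        #include <arpa/inet.h>
        #include <stdio.h>

        int main(void) {
            int sockfd = socket(AF_INET, SOCK_RAW, IPPROTO_TCP);
            if (sockfd < 0) { perror("socket"); return 1; }

            unsigned char buf[65536];
            for (;;) {
                ssize_t n = recv(sockfd, buf, sizeof buf, 0);
                if (n <= 0) break;

                struct iphdr *ip = (struct iphdr *) buf;
                if (n < (ssize_t)(ip->ihl * 4 + sizeof(struct tcphdr))) continue;
                struct tcphdr *tcp = (struct tcphdr *) (buf + ip->ihl * 4);

                if (ntohs(tcp->dest) != 8888) continue;   // not our port, ignore

                printf("segment for port 8888, syn=%u\n", (unsigned) tcp->syn);
                // A reply would have to be hand-crafted (IP_HDRINCL) and sent here.
            }
            return 0;
        }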

    Read the article

  • WiX built-in WixUI Dialog Sets have horizontal lines that are just a little too short (picture included)

    - by Coder7862396
    I am creating an installer for my program using WiX (Windows Installer XML). I have used the following code to begin using the built-in WixUI dialog sets:
        <Product ...>
            <UIRef Id="WixUI_FeatureTree" />
        </Product>
    This, however, creates a dialog set with horizontal lines that are just a little bit too short on every dialog, as shown here: I understand that I could create my own set of dialogs to use instead by using software such as SharpSetup and WixEdit, but I like the dialogs that WiX creates and only want to make a very small change to them. Is my best option to download the WiX source code and try to modify it? Is there a simpler solution? Perhaps I should contact the developers of WiX to report it as a bug? Maybe they like it that way, though. I however think it looks out of place and would like to change it. I am using the latest weekly release of WiX 3.5.

    Read the article

  • Why does first call to java.io.File.createTempFile(String,String,File) take 5 seconds on Citrix?

    - by Ben Roling
    While debugging slow startup of an Eclipse RCP app on a Citrix server, I found that java.io.File.createTempFile(String, String, File) is taking 5 seconds. It does this only on the first execution and only for certain user accounts; specifically, I am noticing it with Citrix anonymous user accounts. I have not tried many other types of accounts, but this behavior is not exhibited with an administrator account. Also, it does not matter whether the user has access to write to the given directory or not: if the user does not have access, the call will take 5 seconds to fail; if they do have access, the call will take 5 seconds to succeed. This is on Windows Server 2003. I've tried Sun's 1.6.0_16 and 1.6.0_19 JREs and see the same behavior. I googled a bit expecting this to be some sort of known issue, but didn't find anything. It seems like someone else would have had to run into this before. The Eclipse Platform uses File.createTempFile() to test various directories to see if they are writable during initialization, and this issue adds 5 seconds to the startup time of our application. I imagine somebody has run into this before and might have some insight. Here is sample code I executed to see that it is indeed this call that is consuming the time. I also tried it with a second call to createTempFile and noticed that subsequent calls return nearly instantaneously.
        public static void main(final String[] args) throws IOException {
            final File directory = new File(args[0]);
            final long startTime = System.currentTimeMillis();
            File file = null;
            try {
                file = File.createTempFile("prefix", "suffix", directory);
                System.out.println(file.getAbsolutePath());
            } finally {
                System.out.println(System.currentTimeMillis() - startTime);
                if (file != null) {
                    file.delete();
                }
            }
        }
    Sample output of this program is the following:
        C:\java.exe -jar filetest.jar C:/Temp
        C:\Temp\prefix8098550723198856667suffix
        5093

    Read the article

  • Clone existing structs with different alignment in Visual C++

    - by Crend King
    Is there a way to clone an existing struct with different member alignment in Visual C++? Here is the background: I use a 3rd-party library which uses several structs. To fill the structs, I pass the address of the struct instances to some functions. Unfortunately, the functions only return an unaligned buffer, so the data of some members is always wrong. /Zp is out of the question, since it breaks other parts of the program. I know #pragma pack modifies the alignment of the following struct, but I would like to avoid copying the struct definitions into my code, since the definitions in the library might change in the future. Sample code:
        // test.h:
        struct am_aligned
        {
            BYTE data1[10];
            ULONG data2;
        };

        // test.cpp:
        #include "test.h"

        // typedef alignment(1) struct am_aligned am_unaligned;

        int APIENTRY wWinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPTSTR lpCmdLine, int nCmdShow)
        {
            char buffer[20] = {};
            for (int i = 0; i < sizeof(unaligned_struct); i++)
            {
                buffer[i] = i;
            }
            am_aligned instance = *(am_aligned*) buffer;
            return 0;
        }
    Consider am_aligned to be defined in the library header file. am_unaligned is my custom declaration, and only effective in test.cpp. The commented line does not work, of course. instance.data2 is 0x0f0e0d0c, while 0x0d0c0b0a is desired. Thanks for help!
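    For what it's worth, here is a minimal sketch of one way to get the packed interpretation without duplicating the vendor struct: copy each member out of the unaligned buffer with memcpy at its packed offset. (Another commonly used option, with its own risks, is wrapping the #include of the vendor header in #pragma pack(push, 1) / #pragma pack(pop), which repacks every struct declared inside.) The helper name read_unaligned is made up; test.h is the sample header from the question:
        // Sketch: extract fields from an unaligned buffer using the packed
        // offsets, leaving the vendor's am_aligned definition untouched.
        #include <cstring>
        #include <windows.h>
        #include "test.h"   // struct am_aligned { BYTE data1[10]; ULONG data2; };

        am_aligned read_unaligned(const char* buffer)
        {
            am_aligned out = {};
            std::memcpy(out.data1, buffer, sizeof out.data1);        // bytes 0..9
            std::memcpy(&out.data2, buffer + sizeof out.data1,       // bytes 10..13
                        sizeof out.data2);
            return out;
        }
    With the 0..19 byte buffer from the sample code, out.data2 comes out as the desired 0x0d0c0b0a on a little-endian machine.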

    Read the article

  • Is there a standard pattern for scanning a job table executing some actions?

    - by Howiecamp
    (I realize that my title is poor. If, after reading the question, you have an improvement in mind, please either edit it or tell me and I'll change it.) I have the relatively common scenario of a job table which has one row per thing that needs to be done. For example, it could be a list of emails to be sent. The table looks something like this:
        ID    Completed    TimeCompleted    anything else...
        ----  ---------    -------------    ----------------
        1     No                            blabla
        2     No                            blabla
        3     Yes          01:04:22         ...
    I'm looking for a standard practice/pattern (or code - C#/SQL Server preferred) for periodically "scanning" (I use the term "scanning" very loosely) this table, finding the not-completed items, doing the action and then marking them completed once done successfully. In addition to the basic process for accomplishing the above, I'm considering the following requirements:
      - I'd like some means of "scaling linearly", e.g. running multiple "worker processes" simultaneously or threading or whatever. (Just a specific technical thought - I'm assuming that as a result of this requirement, I need some method of marking an item as "in progress" to avoid attempting the action multiple times.)
      - Each item in the table should only be executed once.
    Some other thoughts:
      - I'm not particularly concerned with whether the implementation is done in the database (e.g. in T-SQL or PL/SQL code) vs. some external program code (e.g. a standalone executable or some action triggered by a web page) executed against the database.
      - Whether the "doing the action" part is done synchronously or asynchronously is not something I'm considering as part of this question.

    Read the article

  • Perl - CodeGolf - Nested loops & SQL inserts

    - by CheeseConQueso
    I had to make a really small and simple script that would fill a table with string values according to these criteria:
      - 2 characters long
      - 1st character is always numeric (0-9)
      - 2nd character is (0-9) but also includes "X"
      - Values need to be inserted into a table on a database
    The program would execute:
        insert into table (code) values ('01');
        insert into table (code) values ('02');
        insert into table (code) values ('03');
        insert into table (code) values ('04');
        insert into table (code) values ('05');
        insert into table (code) values ('06');
        insert into table (code) values ('07');
        insert into table (code) values ('08');
        insert into table (code) values ('09');
        insert into table (code) values ('0X');
    And so on, until all 110 values were inserted. My code (just to accomplish it, not to minimize it or make it efficient) was:
        use strict;
        use DBI;

        my ($db1, $sql, $sth, %dbattr);
        %dbattr = (ChopBlanks => 1, RaiseError => 0);
        $db1 = DBI->connect('DBI:mysql:', '', '', \%dbattr);

        my @code;
        for (0..9) {
            $code[0] = $_;
            for (0..9) {
                $code[1] = $_;
                insert(@code);
            }
            insert($code[0], "X");
        }

        sub insert {
            my $skip = 0;
            foreach (@_) {
                if ($skip == 0) {
                    $sql = "insert into table (code) values ('".$_[0].$_[1]."');";
                    $sth = $db1->prepare($sql);
                    $sth->execute();
                    $skip++;
                } else {
                    $skip--;
                }
            }
        }
        exit;
    I'm just interested to see a really succinct and precise version of this logic.

    Read the article

  • SSH with Perl using file handles, not Net::SSH

    - by jorge
    Before I ask the question: I cannot use the CPAN module Net::SSH. I want to, but cannot; no amount of begging will change this fact. I need to be able to open an SSH connection, keep it open, and read from its stdout and write to its stdin. My approach thus far has been to open it in a pipe, but I have not been able to advance past this; it dies straight away. Here is what I have in mind; I understand this causes a fork to occur, and I've written code accordingly (or so I think). Below is a skeleton of what I want; I just need the mechanism to work.
        #!/usr/bin/perl
        use warnings;

        $| = 1;
        $pid = open(SSH, "| ssh user\@host");
        if (defined($pid)) {
            if (!$pid) {
                # child
                while (<>) {
                    print;
                }
            } else {
                select SSH;
                $| = 1;
                select STDIN;
                # parent
                while (<>) {
                    print SSH $_;
                    while (<SSH>) {
                        print;
                    }
                }
                close(SSH);
            }
        }
    I know, from what it looks like, I'm trying to recreate "system('ssh user@host')"; that is not my end goal, but knowing how to do that would bring me much closer to it. Basically, I need a file handle to an open SSH connection from which I can read the output and to which I can write input (not necessarily straight from my program's STDIN - anything I want: variables, yada yada). This includes password input. I know about key pairs; part of the end goal involves making key pairs, but the connection needs to happen regardless of their existence, and if they do not exist it's part of my plan to make them exist.

    Read the article

  • Microsecond-resolution timestamps on Windows

    - by Nikhil
    How do I get microsecond-resolution timestamps on Windows? I am looking for something better than QueryPerformanceCounter and QueryPerformanceFrequency (these can only give you an elapsed time since boot, and are not necessarily accurate if they are called on different threads, i.e. QueryPerformanceCounter may return different results on different CPUs. There are also some processors that adjust their frequency for power saving, which apparently isn't always reflected in their QueryPerformanceFrequency result.) There is this, http://msdn.microsoft.com/en-us/magazine/cc163996.aspx, but it does not seem to be solid. This looks great but it's not available for download any more: http://www.ibm.com/developerworks/library/i-seconds/. This is another resource, http://www.lochan.org/2005/keith-cl/useful/win32time.html, but it requires a number of steps (running a helper program plus some init stuff) and I am not sure if it works on multiple CPUs. I also looked at the Wikipedia article on the subject, which is interesting but not that useful: http://en.wikipedia.org/wiki/Time_Stamp_Counter. If the answer is just "do this with BSD or Linux, it's a lot easier", that's fine, but I would like to confirm this and get some explanation as to why this is so hard on Windows and so easy on Linux and BSD. It's the same damn hardware...
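    As a point of reference, here is a minimal sketch of the QueryPerformanceCounter approach with the two mitigations usually suggested for the problems mentioned above: pinning the thread to a single core so successive readings come from the same counter, and splitting the conversion into seconds and remainder to avoid 64-bit overflow. It still yields only elapsed time, so it would have to be anchored to a wall-clock reading (e.g. GetSystemTimeAsFileTime) if absolute timestamps are needed:
        // Sketch: elapsed time in microseconds from QueryPerformanceCounter.
        #include <windows.h>
        #include <cstdio>

        static long long microseconds_now()
        {
            static long long counts_per_sec = 0;
            if (counts_per_sec == 0) {
                LARGE_INTEGER f;
                QueryPerformanceFrequency(&f);   // counts per second, fixed at boot
                counts_per_sec = f.QuadPart;
            }

            LARGE_INTEGER counter;
            QueryPerformanceCounter(&counter);

            // Split the conversion so the multiply by 1,000,000 cannot overflow.
            long long seconds   = counter.QuadPart / counts_per_sec;
            long long remainder = counter.QuadPart % counts_per_sec;
            return seconds * 1000000LL + remainder * 1000000LL / counts_per_sec;
        }

        int main()
        {
            // Keep all readings on one core to avoid cross-CPU counter skew.
            SetThreadAffinityMask(GetCurrentThread(), 1);

            long long t0 = microseconds_now();
            Sleep(10);
            std::printf("elapsed: %lld us\n", microseconds_now() - t0);
            return 0;
        }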

    Read the article

  • Do I need to implement an XMPP server?

    - by WTFITS
    (newbie alert) I need to program a multiparty communication service for a course project, and I am considering XMPP for it. The service needs the following messaging semantics:
      1) The server will provide a method of registering and unregistering an address such as [email protected]/SomeResource. (For now I will do it manually.)
      2) The server will provide a method of forwarding incoming messages from, say, [email protected]/SomeResource to [email protected]/someOtherResource, assuming that the latter is registered, and a method for removing this forwarding. (For now I will do it manually.)
      3) Anonymous clients can send messages to, say, [email protected]/someresource (one-way traffic only). If any forwarding is set up, the message will be forwarded. Otherwise, if the address [email protected]/someresource is registered, the message will be stored for later delivery (or immediate delivery if a retrieving client is online - see below). If there is no forwarding and the address is unregistered, the message will be silently dropped.
      4) Clients can connect and retrieve messages from a registered address. The exact method of authenticating clients (e.g. passwords?) is yet to be determined.
    Eventually, I want to add support for clients to connect from a web browser so they can register/unregister and set/remove forwarding themselves. Thus, the server will have to do some non-standard switching. Will I need to implement an XMPP server for this? I guess some (or all?) of this could also be done using an XMPP client bot.

    Read the article

  • Counting entries in a list of dictionaries: for loop vs. list comprehension with map(itemgetter)

    - by Dennis Williamson
    In a Python program I'm writing, I've compared using a for loop with increment variables versus a list comprehension with map(itemgetter) and len() when counting entries in dictionaries which are in a list. It takes the same time using each method. Am I doing something wrong, or is there a better approach? Here is a greatly simplified and shortened data structure:
        list = [
            {'key1': True,  'dontcare': False, 'ignoreme': False, 'key2': True,  'filenotfound': 'biscuits and gravy'},
            {'key1': False, 'dontcare': False, 'ignoreme': False, 'key2': True,  'filenotfound': 'peaches and cream'},
            {'key1': True,  'dontcare': False, 'ignoreme': False, 'key2': False, 'filenotfound': 'Abbott and Costello'},
            {'key1': False, 'dontcare': False, 'ignoreme': True,  'key2': False, 'filenotfound': 'over and under'},
            {'key1': True,  'dontcare': True,  'ignoreme': False, 'key2': True,  'filenotfound': 'Scotch and... well... neat, thanks'}
        ]
    Here is the for loop version:
        #!/usr/bin/env python
        # Python 2.6
        # count the entries where key1 is True
        # keep a separate count for the subset that also have key2 True
        key1 = key2 = 0
        for dictionary in list:
            if dictionary["key1"]:
                key1 += 1
                if dictionary["key2"]:
                    key2 += 1
        print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2)
    Output for the data above:
        Counts: key1: 3, subset key2: 2
    Here is the other, perhaps more Pythonic, version:
        #!/usr/bin/env python
        # Python 2.6
        # count the entries where key1 is True
        # keep a separate count for the subset that also have key2 True
        from operator import itemgetter
        KEY1 = 0
        KEY2 = 1
        getentries = itemgetter("key1", "key2")
        entries = map(getentries, list)
        key1 = len([x for x in entries if x[KEY1]])
        key2 = len([x for x in entries if x[KEY1] and x[KEY2]])
        print "Counts: key1: " + str(key1) + ", subset key2: " + str(key2)
    Output for the data above (same as before):
        Counts: key1: 3, subset key2: 2
    I'm a tiny bit surprised these take the same amount of time. I wonder if there's something faster. I'm sure I'm overlooking something simple. One alternative I've considered is loading the data into a database and doing SQL queries, but the data doesn't need to persist and I'd have to profile the overhead of the data transfer, etc., and a database may not always be available. I have no control over the original form of the data. The code above is not going for style points.

    Read the article

  • Trying to create a .NET DLL to be used with Non-.NET Application

    - by Changeling
    I am trying to create a .NET DLL so I can use its cryptographic functions from my non-.NET application. I have created a class library so far with this code:
        namespace AESEncryption
        {
            public class EncryptDecrypt
            {
                private static readonly byte[] optionalEntropy = { 0x21, 0x05, 0x07, 0x08, 0x27, 0x02, 0x23, 0x36, 0x45, 0x50 };

                public interface IEncrypt
                {
                    string Encrypt(string data, string filePath);
                };

                public class EncryptDecryptInt : IEncrypt
                {
                    public string Encrypt(string data, string filePath)
                    {
                        byte[] plainKey;
                        try
                        {
                            // Read in the secret key from our cipher key store
                            byte[] cipher = File.ReadAllBytes(filePath);
                            plainKey = ProtectedData.Unprotect(cipher, optionalEntropy, DataProtectionScope.CurrentUser);

                            // Convert our plaintext data into a byte array
                            byte[] plainTextBytes = Encoding.ASCII.GetBytes(data);
                            MemoryStream ms = new MemoryStream();
                            Rijndael alg = Rijndael.Create();
                            alg.Mode = CipherMode.CBC;
                            alg.Key = plainKey;
                            alg.IV = optionalEntropy;
                            CryptoStream cs = new CryptoStream(ms, alg.CreateEncryptor(), CryptoStreamMode.Write);
                            cs.Write(plainTextBytes, 0, plainTextBytes.Length);
                            cs.Close();
                            byte[] encryptedData = ms.ToArray();
                            return Convert.ToString(encryptedData);
                        }
                        catch (Exception ex)
                        {
                            return ex.Message;
                        }
                    }
                }
            }
        }
    In my VC++ application, I am using the #import directive to import the TLB file created from the DLL, but the only available items are _AESEncryption and LIB_AES etc.; I don't see the interface or the Encrypt function. When I try to instantiate the class so I can call the functions from my VC++ program, I use this code and get the following errors:
        HRESULT hr = CoInitialize(NULL);
        IEncryptPtr pIEncrypt(__uuidof(EncryptDecryptInt));

        error C2065: 'IEncryptPtr': undeclared identifier
        error C2146: syntax error : missing ';' before identifier 'pIEncrypt'

    Read the article

  • Is there really such a thing as a char or short in modern programming?

    - by Dean P
    Howdy all, I've been learning to program for the Mac over the past few months (I have experience in other languages). Obviously that has meant learning the Objective-C language and thus the plainer C it is predicated on. So I have stumbled upon this quote, which refers to the C/C++ language in general, not just the Mac platform: "With C and C++ prefer use of int over char and short. The main reason behind this is that C and C++ perform arithmetic operations and parameter passing at integer level. If you have an integer value that can fit in a byte, you should still consider using an int to hold the number. If you use a char, the compiler will first convert the values into integer, perform the operations and then convert back the result to char." So my question is: is this the case in the Mac desktop and iPhone OS environments? I understand that when talking about these environments we're actually talking about 3-4 different architectures (PPC, i386, ARM and the A4 ARM variant), so there may not be a single answer. Nevertheless, does the general principle hold that in modern 32-bit/64-bit systems, using 1-2 byte variables that don't align with the machine's natural 4-byte words doesn't provide much of the efficiency we may expect? For instance, a plain old C array of 100,000 chars is smaller than the same 100,000 ints by a factor of four, but if, during an enumeration, reading out each index involves a cast/boxing/unboxing of sorts, will we see overall lower 'performance' despite the saved memory overhead?
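    As a concrete illustration of the promotion behaviour the quote describes (a small standalone C++ sketch, not Mac- or iPhone-specific):
        // Integer promotion in action: arithmetic on char operands happens at
        // (at least) int width, so small types mainly save storage, not
        // arithmetic work.
        #include <cstdio>

        int main()
        {
            char a = 100, b = 100;

            // a and b are promoted to int before the addition; assigning the
            // result back to char narrows it again (on targets where char is
            // signed, 200 wraps to a negative value), while i keeps 200.
            char c = static_cast<char>(a + b);
            int  i = a + b;

            std::printf("sizeof(a + b) = %zu\n", sizeof(a + b));   // sizeof(int), not 1
            std::printf("c = %d, i = %d\n", c, i);

            // Where small types do pay off: storage and cache footprint.
            std::printf("100000 chars: %zu bytes, 100000 ints: %zu bytes\n",
                        sizeof(char[100000]), sizeof(int[100000]));
            return 0;
        }
    On typical 32-bit and 64-bit targets the promoted arithmetic costs about the same as plain int arithmetic; the trade-off is mostly the occasional narrowing store and any misaligned packing versus the cache-footprint savings of the smaller array.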

    Read the article

  • Sudoku Recursion Issue (Java)

    - by SkylineAddict
    I'm having an issue with creating a random Sudoku grid. I tried modifying a recursive routine that I used to solve the puzzle. The puzzle itself is a two-dimensional integer array. This is what I have (by the way, the method doesn't only randomize the first row; I had an idea to randomize the first row, then just decided to do the whole grid):
        public boolean randomizeFirstRow(int row, int col) {
            Random rGen = new Random();
            if (row == 9) {
                return true;
            } else {
                boolean res;
                for (int ndx = rGen.nextInt() + 1; ndx <= 9;) {
                    // Input values into the boxes
                    sGrid[row][col] = ndx;
                    // Then test to see if the value is valid
                    if (this.isRowValid(row, sGrid) && this.isColumnValid(col, sGrid) && this.isQuadrantValid(row, col, sGrid)) {
                        // grid valid, move to the next cell
                        if (col + 1 < 9) {
                            res = randomizeFirstRow(row, col + 1);
                        } else {
                            res = randomizeFirstRow(row + 1, 0);
                        }
                        // If the value inputed is valid, restart loop
                        if (res == true) {
                            return true;
                        }
                    }
                }
            }
            // If no value can be put in, set value to 0 to prevent program counting to 9
            setGridValue(row, col, 0);
            // Return to previous method in stack
            return false;
        }
    This results in an ArrayIndexOutOfBoundsException with a ridiculously high or low number (+/- 100,000). I've tried to see how far it gets into the method, and it never goes beyond this line:
        if (this.isRowValid(row, sGrid) && this.isColumnValid(col, sGrid) && this.isQuadrantValid(row, col, sGrid))
    I don't understand how the array index gets so high. Can anyone help me out?

    Read the article

  • Running bcdedit from Python in Windows 2008 SP2

    - by Lee-Man
    I do not know Windows well, so that may explain my dilemma... I am trying to run bcdedit on Windows 2008 R2 from Python 2.6. My Python routine to run a command looks like this:
        def run_program(cmd_str):
            """Run the specified command, returning its output as an array of lines"""
            dprint("run_program(%s): entering" % cmd_str)
            cmd_args = cmd_str.split()
            subproc = subprocess.Popen(cmd_args, stdout=subprocess.PIPE,
                                       stderr=subprocess.PIPE, shell=True)
            (outf, errf) = (subproc.stdout, subproc.stderr)
            olines = outf.readlines()
            elines = errf.readlines()
            if Options.debug:
                if elines:
                    dprint('Error output:')
                    for line in elines:
                        dprint(line.rstrip())
                if olines:
                    dprint('Normal output:')
                    for line in olines:
                        dprint(line.rstrip())
            errf.close()
            outf.close()
            res = subproc.wait()
            dprint('wait result=', res)
            return (res, olines)
    I call this function thusly:
        (res, o) = run_program('bcdedit /set {current} MSI forcedisable')
    This command works when I type it in a cmd window, and it works when I put it in a batch file and run it from a command window (as Administrator, of course). But when I run it from Python (as Administrator), Python claims it can't find the command, returning:
        bcdedit is not recognized as an internal or external command, operable program or batch file
    Also, if I try running my batch file from Python (which works from the command line), it also fails. I've also tried it with the full path to bcdedit, with the same results. What is it about calling bcdedit from Python that makes it not found? Note that I can call other EXE files from Python, so I have some level of confidence that my Python code is sane... but who knows. Any help would be most appreciated.

    Read the article

  • Definition of the job titles involved in a software development process.

    - by Rafael Romão
    I have seen many job titles for people involved in a software development process, but have never found a consensus on what they mean. I know many of them are equivalent, and found some other questions about that here on SO, but I would like to know your definitions and comments about them. I want not only to know whether there is really a consensus, but also to know whether what I suppose to be a Software Architect really is a Software Architect, and so on. The job titles I mean are:
      - Developer
      - System Analyst
      - Programmer
      - Analyst Programmer
      - Software Engineer
      - Software Architect
      - Designer
      - Software Designer
      - Business Manager
      - Business Analyst
      - Program Manager
      - Project Manager
      - Development Manager
      - Tester
      - Support Analyst
    Please feel free to add more titles to this list in your answers. It would be very helpful.

    Read the article

  • EPL (Eclipse Public Licence) for commercial usage

    - by code-gijoe
    Hi, I'm developing an application which requires a third-party framework that is under the Eclipse Public Licence (EPL). The application is a server-side commercial application which will be running on my servers. The EPL software is distributed as binaries (jar files). I'm only using the packages and am not making any contribution, i.e. not making any changes to the source. Under the EPL I believe I'm not a "Contributor", nor am I making a "Contribution". But if I want to make my software available to be installed at some offsite server, I'm having trouble with clause b.iv of the REQUIREMENTS section of the EPL: "states that source code for the Program is available from such Contributor, and informs licensees how to obtain it in a reasonable manner on or through a medium customarily used for software exchange". Does this mean that if I were to modify the source code of the 3rd-party framework for my own purposes, I would need to distribute all of my source code? The EPL is supposed to be commercially friendly, but it doesn't seem that way to me. Thank you.

    Read the article

  • Garbled text when constructing emails with vmime

    - by Klaus Fiedler
    Hey, my Qt C++ program has a part where it needs to send the first 128 characters or so of the output of a bash command to an email address. The output from the tty is captured in a text box in my GUI called textEdit_displayOutput and put into the message I built using the message builder (the object m_vmMessage). Here is the relevant code snippet:
        m_vmMessage.getTextPart()->setCharset( vmime::charsets::US_ASCII );
        m_vmMessage.getTextPart()->setText( vmime::create < vmime::stringContentHandler >
            ( ui->textEdit_displayOutput->toPlainText().toStdString() ) );

        vmime::ref < vmime::message > msg = m_vmMessage.construct();
        vmime::utility::outputStreamAdapter out( std::cout );
        msg->generate( out );
    Giving bash 'ls /' and a newline makes vmime produce terminal output like this:
        ls /=0Abin etc=09 initrd.img.old mnt=09 sbin=09 tmp=09 vmlinuz.o=
        ld=0Aboot farts=09 lib=09=09 opt=09 selinux usr=0Acdrom home=09 =
        lost+found=09 proc srv=09 var=0Adev initrd.img media=09 root =
    Whereas it should look more like this:
        ls /
        bin    etc         initrd.img.old  mnt   sbin     tmp  vmlinuz.old
        boot   farts       lib             opt   selinux  usr
        cdrom  home        lost+found      proc  srv      var
        dev    initrd.img  media           root  sys      vmlinuz
        18:22>
    How do I encode the email properly? Does vmime just display it like that on purpose, and is the actual content of the email OK?
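    What is shown is almost certainly not corruption but the quoted-printable content transfer encoding that mail libraries apply to message bodies: =0A is an encoded line feed, =09 an encoded tab, and a bare = at the end of a line is a soft line break, all of which a receiving mail client decodes transparently, so the delivered text should look fine. Purely as an illustration of that mapping (this is not the vmime API), a tiny C++ decoder:
        // Illustrative quoted-printable decoding: "=0A" -> '\n', "=09" -> '\t',
        // "=" at end of line -> soft break (removed).  Real code should rely on
        // the mail library's own decoder.
        #include <cstdlib>
        #include <iostream>
        #include <string>

        std::string decode_qp(const std::string& in)
        {
            std::string out;
            for (std::size_t i = 0; i < in.size(); ++i) {
                if (in[i] != '=') { out += in[i]; continue; }
                if (i + 1 < in.size() && in[i + 1] == '\n') { ++i; continue; }  // soft break
                if (i + 2 < in.size()) {
                    out += static_cast<char>(std::strtol(in.substr(i + 1, 2).c_str(), nullptr, 16));
                    i += 2;
                }
            }
            return out;
        }

        int main()
        {
            std::cout << decode_qp("ls /=0Abin etc=09 initrd.img.old") << "\n";
            // prints "ls /", then "bin etc<TAB> initrd.img.old" on the next line
        }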

    Read the article

  • iPhone SDK math - Pythagorean theorem problem!

    - by Flafla2
    Just as a practice, I am working on an app that solves the famous middle-school Pythagorean theorem: a squared + b squared = c squared. Unfortunately, the resulting answer has, in my eyes, nothing to do with the actual answer. Here is the code used during the "solve" action.
        - (IBAction)solve {
            int legoneint;
            int legtwoint;
            int hypotenuseint;

            int lonesq = legoneint * legoneint;
            int ltwosq = legtwoint * legtwoint;
            int hyposq = hypotenuseint * hypotenuseint;
            hyposq = lonesq + ltwosq;

            if ([legone.text isEqual:@""]) {
                legtwoint = [legtwo.text intValue];
                hypotenuseint = [hypotenuse.text intValue];
                answer.text = [NSString stringWithFormat:@"%d", legoneint];
                self.view.backgroundColor = [UIColor blackColor];
            }
            if ([legtwo.text isEqual:@""]) {
                legoneint = [legone.text intValue];
                hypotenuseint = [hypotenuse.text intValue];
                answer.text = [NSString stringWithFormat:@"%d", legtwoint];
                self.view.backgroundColor = [UIColor blackColor];
            }
            if ([hypotenuse.text isEqual:@""]) {
                legoneint = [legone.text intValue];
                legtwoint = [legtwo.text intValue];
                answer.text = [NSString stringWithFormat:@"%d", hypotenuseint];
                self.view.backgroundColor = [UIColor blackColor];
            }
        }
    By the way, legone, legtwo, and hypotenuse all represent the UITextFields that correspond to each mathematical part of the right triangle. answer is the UILabel that tells, you guessed it, the answer. Does anyone see any flaws in the program? Thanks in advance!
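    For comparison, here is a sketch of the intended arithmetic (written in C++ for brevity; the issue is language-neutral). In the snippet above the squares appear to be computed from the variables before the text fields have been read into them, and the displayed value is never derived from the sum, nor is a square root taken; the usual order of operations looks more like this:
        // Sketch: read the two known sides first, then solve for the missing one.
        #include <cmath>
        #include <cstdio>

        double solveForHypotenuse(double legOne, double legTwo)
        {
            return std::sqrt(legOne * legOne + legTwo * legTwo);
        }

        double solveForLeg(double knownLeg, double hypotenuse)
        {
            return std::sqrt(hypotenuse * hypotenuse - knownLeg * knownLeg);
        }

        int main()
        {
            std::printf("%g\n", solveForHypotenuse(3.0, 4.0));  // 5
            std::printf("%g\n", solveForLeg(3.0, 5.0));         // 4
            return 0;
        }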

    Read the article

  • Looking to reimplement build toolchain from bash/grep/sed/awk/(auto)make/configure to something more

    - by wash
    I currently maintain a few boxes that house a loosely related cornucopia of coding projects, databases and repositories (ranging from a homebrew *nix distro to my class notes), maintained by myself and a few equally pasty-skinned nerdy friends (all of said cornucopia is stored in SVN). The vast majority of our code is in C/C++/assembly (a few utilities are in python/perl/php; we're not big java fans), compiled with gcc. Our build toolchain typically consists of a hodgepodge of make, bash, grep, sed and awk. Recent discovery of a Makefile nearly as long as the program it builds (as well as everyone's general anxiety over my cryptic sed and awking) has motivated me to seek a less painful build system. Currently, the strongest candidate I've come across is Boost Build/Bjam as a replacement for GNU make and python as a replacement for our build-related bash scripts. Are there any other C/C++/asm build systems out there worth looking into? I've browsed through a number of make alternatives, but I haven't found any that are developed by names I know aside from Boost's. (I should note that the ability to easily extract information from SVN command-line tools such as svnversion is important, as well as enough flexibility to configure builds of asm projects as easily as C/C++ projects.)

    Read the article

  • C# Minimize all running windows when application runs

    - by Derek
    I am working on a C# Windows Forms application. How can I change my code so that when two or more faces are detected by my webcam - that is, when "FaceDetectedLabel.Text = "Faces Detected : " + cam.facesdetected.ToString();" shows Faces Detected: 2 or more - the application does the following:
      - Minimize all running programs except my application.
      - Log out of my computer.
    Here is my code:
        namespace PBD
        {
            public partial class MainPage : Form
            {
                // declaring global variables
                private Capture capture; // takes images from camera as image frames

                public MainPage()
                {
                    InitializeComponent();
                }

                private void ProcessFrame(object sender, EventArgs arg)
                {
                    Wrapper cam = new Wrapper();
                    // show the image in the EmguCV ImageBox
                    WebcamPictureBox.Image = cam.start_cam(capture).Resize(390, 243, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC).ToBitmap();
                    FaceDetectedLabel.Text = "Faces Detected : " + cam.facesdetected.ToString();
                }

                private void MainPage_Load(object sender, EventArgs e)
                {
                    #region if capture is not created, create it now
                    if (capture == null)
                    {
                        try
                        {
                            capture = new Capture();
                        }
                        catch (NullReferenceException excpt)
                        {
                            MessageBox.Show(excpt.Message);
                        }
                    }
                    #endregion
                    Application.Idle += ProcessFrame;
                }

    Read the article
