Search Results

Search found 74760 results on 2991 pages for 'print to file'.


  • Start or ensure that Delayed Job runs when an application/server restarts.

    - by btelles
    Hi there, We have to use delayed_job (or some other background-job processor) to run jobs in the background, but we're not allowed to change the boot scripts/boot-levels on the server. This means that the daemon is not guaranteed to remain available if the provider restarts the server (since the daemon would have been started by a Capistrano recipe that is only run once per deployment). Currently, the best way I can think of to ensure the delayed_job daemon is always running is to add an initializer to our Rails application that checks whether the daemon is running. If it's not running, the initializer starts the daemon; otherwise, it just leaves it be. The question, therefore, is how do we detect that the delayed_job daemon is running from inside a script? (We should be able to start up a daemon fairly easily, but I don't know how to detect whether one is already active.) Anyone have any ideas? Regards, Bernie

    Based on the answer below, this is what I came up with. Just put it in config/initializers and you're all set:

        # config/initializers/delayed_job.rb
        DELAYED_JOB_PID_PATH = "#{Rails.root}/tmp/pids/delayed_job.pid"

        def start_delayed_job
          Thread.new do
            `ruby script/delayed_job start`
          end
        end

        def process_is_dead?
          begin
            pid = File.read(DELAYED_JOB_PID_PATH).strip
            Process.kill(0, pid.to_i)   # signal 0 only checks that the process exists
            false
          rescue
            true
          end
        end

        # (Re)start the daemon when the pid file is missing or the recorded process is gone.
        if !File.exist?(DELAYED_JOB_PID_PATH) || process_is_dead?
          start_delayed_job
        end

    Read the article

  • C++ templates - matrix class

    - by lastOfMohicans
    Hi, I'm learning templates in C++, so I decided to write a matrix class as a template class. In Matrix.h I wrote:

        #pragma once
        #include "stdafx.h"
        #include <vector>
        using namespace std;

        template<class T>
        class Matrix
        {
        public:
            Matrix();
            ~Matrix();
            GetDataVector();
            SetDataVector(vector<vector<T>> dataVector);
            bool operator == (Matrix* matrix);
            bool operator < (Matrix* matrix);
            bool operator <= (Matrix* matrix);
            bool operator > (Matrix* matrix);
            bool operator >= (Matrix* matrix);
            Matrix* operator + (Matrix* matrix);
            Matrix* operator - (Matrix* matrix);
            Matrix* operator * (Matrix* matrix);
        private:
            vector<vector<T>> datavector;
            int columns, rows;
        };

    In Matrix.cpp, Visual Studio automatically generated code for the default constructor and destructor:

        #include "StdAfx.h"
        #include "Matrix.h"

        Matrix::Matrix()
        {
        }

        Matrix::~Matrix()
        {
        }

    However, when I try to compile this I get an error: 'Matrix' : use of class template requires template argument list. The errors are in Matrix.cpp, on the default constructor and destructor. What may be the problem?
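    A minimal sketch of the usual way around that error, assuming the common convention of keeping template code in the header: every out-of-class member definition needs its own template parameter list and the Matrix<T>:: qualifier (or the members can simply be defined inline in the class).

        // Matrix.h (sketch): out-of-class definitions of template members
        template<class T>
        Matrix<T>::Matrix() : columns(0), rows(0)   // note Matrix<T>::, not Matrix::
        {
        }

        template<class T>
        Matrix<T>::~Matrix()
        {
        }

    Defined this way in Matrix.cpp they would only be instantiated if that .cpp were included or explicitly instantiated, which is why template members normally live in the header.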

    Read the article

  • Pre Commit Hook for JSLint in Mercurial and Git

    - by jrburke
    I want to run JSLint before a commit into either a Mercurial or Git repo is done. I want this to be an automatic step that is set up, instead of relying on the developer (mainly me) remembering to run JSLint beforehand. I normally run JSLint while developing, but I want to enforce a contract on JS files: they must pass JSLint before being committed to the repo.

    For Mercurial, this page spells out the precommit syntax, but the only variables that seem to be available are the parent1 and parent2 changeset IDs involved in the commit. What I really want is a list of the file names involved in the commit, so that I can then choose the .js files and run jslint over them. It is a similar issue for Git: the default info available to the pre-commit script seems limited.

    What might work is calling hg status/git status as part of the pre-commit script, parsing that output to find JS files, then doing the work that way. I was hoping for something easier though, and I am not sure whether calling status as part of a pre-commit hook reflects the correct information. For instance, in Git, if the changed files have not been added yet but the git commit uses -a, would the files show up in the correct section of the git status output as being part of the commit set?

    Update: I got something working; it is visible here: http://github.com/jrburke/dvcs_jslint/
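    For the Git side, a pre-commit hook can ask Git directly for the staged files instead of parsing git status; a small sketch (the jslint command name and arguments are assumptions and depend on how JSLint is installed locally):

        #!/usr/bin/env python
        # .git/hooks/pre-commit (sketch): run JSLint on the JavaScript files staged for this commit.
        import subprocess
        import sys

        # --cached lists what is actually staged for the commit being made.
        out = subprocess.check_output(
            ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"])
        js_files = [f for f in out.decode().splitlines() if f.endswith(".js")]

        for f in js_files:
            # "jslint" stands in for however JSLint is invoked here (Rhino, Node, etc.).
            if subprocess.call(["jslint", f]) != 0:
                print("JSLint failed on %s; aborting commit." % f)
                sys.exit(1)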

    Read the article

  • How to scrape a _private_ google group?

    - by John
    Hi there, I'd like to scrape the discussion list of a private Google group. It's a multi-page list and I might have to do this again later, so scripting sounds like the way to go. Since this is a private group, I need to log in to my Google account first. Unfortunately I can't manage to log in using wget or Ruby Net::HTTP. Surprisingly, Google Groups is not accessible with the ClientLogin interface, so all the code samples are useless.

    My Ruby script is embedded at the end of the post. The response to the authentication query is a 200 OK, but there are no cookies in the response headers and the body contains the message "Your browser's cookie functionality is turned off. Please turn it on." I got the same output with wget (see the bash script at the end of this message). I don't know how to work around this. Am I missing something? Any ideas? Thanks in advance. John

    Here is the Ruby script:

        # a ruby script
        require 'net/https'

        http = Net::HTTP.new('www.google.com', 443)
        http.use_ssl = true
        path = '/accounts/ServiceLoginAuth'
        email = '[email protected]'
        password = 'topsecret'

        # form inputs from the login page
        data = "Email=#{email}&Passwd=#{password}&dsh=7379491738180116079&GALX=irvvmW0Z-zI"
        headers = {
          'Content-Type' => 'application/x-www-form-urlencoded',
          'user-agent' => "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/533.2 (KHTML, like Gecko) Chrome/6.0"
        }

        # Post the request and print out the response to retrieve our authentication token
        resp, data = http.post(path, data, headers)
        puts resp
        resp.each { |h, v| puts h + '=' + v }
        # warning: peer certificate won't be verified in this SSL session

    Here is the bash script:

        # A bash script for wget
        CMD=""
        CMD="$CMD --keep-session-cookies --save-cookies cookies.tmp"
        CMD="$CMD --no-check-certificate"
        CMD="$CMD --post-data='[email protected]&Passwd=topsecret&dsh=-8408553335275857936&GALX=irvvmW0Z-zI'"
        CMD="$CMD --user-agent='Mozilla'"
        CMD="$CMD https://www.google.com/accounts/ServiceLoginAuth"
        echo $CMD
        wget $CMD
        wget --load-cookies="cookies.tmp" http://groups.google.com/group/mygroup/topics?tsc=2
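    As a quick sanity check on the "no cookies in the response headers" observation, Net::HTTP can list every Set-Cookie header it actually received via get_fields; a small sketch that could go right after the http.post call above:

        # Sketch: print each Set-Cookie header in the login response, if any were sent.
        cookies = resp.get_fields('set-cookie') || []
        if cookies.empty?
          puts "no Set-Cookie headers in the response"
        else
          cookies.each { |c| puts c }
        end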

    Read the article

  • PrintableArea in C# - Bug?

    - by Brandi
    I am having an issue with PageSettings.PrintableArea's width and height values. The Width, Height, and Size properties claim to "get or set" the values, and the Inflate() function claims to change the size based on the values passed in. However, none of these attempts to change the value have worked. Inflate() is ignored (no error; it just passes as if it worked, but the values remain unchanged). Attempting to set the Height, Width, or Size gives a compiler error: "Cannot modify the return value of 'System.Drawing.Printing.PageSettings.PrintableArea' because it is not a variable". I get the feeling that this means the "or set" part of the description is a lie.

    Why I want to know this (someone always asks...): I have a printing application (C#, WinForms) that for most things is working rather well. I can set the printer settings and page settings objects to control what displays in the print dialog's printer properties. However, with the Microsoft Office Document Image Writer, these settings are sometimes ignored, and the paper size comes back as 0, 0 even when it displayed something else. All I really want is for it to be WYSIWYG as far as the displayed values go, so I change the paper size back to what it should be; but the printable area, if it is wrong, makes the resulting image wonky. The resulting image is the size of the printable area instead of the value in PaperSize. Just wondering if there is a reason for this or a way to get it not to do that. Thanks in advance. :)

    Read the article

  • How do I convert an NSMutableString to NSString when using Frameworks?

    - by BWHazel
    I have written an Objective-C framework which builds some HTML code with an NSMutableString and returns the value as an NSString. I have declared an NSString and an NSMutableString in the interface (.h) file:

        NSString *_outputLanguage;   // Tests language output
        NSMutableString *outputBuilder;
        NSString *output;

    This is a sample from the framework implementation (.m) code (I cannot print the actual code as it is proprietary):

        - (NSString *)doThis:(NSString *)aString num:(int)aNumber
        {
            if ([outputBuilder length] != 0) {
                [outputBuilder setString:@""];
            }

            if ([_outputLanguage isEqualToString:@"html"]) {
                [outputBuilder appendString:@"Some Text..."];
                [outputBuilder appendString:aString];
                [outputBuilder appendString:[NSString stringWithFormat:@"%d", aNumber]];
            } else if ([_outputLanguage isEqualToString:@"xml"]) {
                [outputBuilder appendString:@"Etc..."];
            } else {
                [outputBuilder appendString:@""];
            }

            output = outputBuilder;
            return output;
        }

    When I wrote a test program, NSLog simply printed "(null)". The code I wrote there was:

        TheClass *instance = [[TheClass alloc] init];
        NSString *testString = [instance doThis:@"This String" num:20];
        NSLog(@"%@", testString);
        [instance release];

    I hope this is enough information!
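    One thing worth checking, judging only from the "(null)" symptom: if outputBuilder is never created, every appendString: goes to nil and the returned string is nil, which NSLog prints as "(null)". A minimal sketch of an init override that would rule that out (the default language value is an assumption; use whatever the class really needs):

        // Sketch: make sure the mutable string exists before doThis:num: appends to it.
        - (id)init
        {
            self = [super init];
            if (self) {
                outputBuilder = [[NSMutableString alloc] init];
                _outputLanguage = @"html";   // assumed default output language
            }
            return self;
        }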

    Read the article

  • Using db4o with multiple application instances under medium trust

    - by Anders Fjeldstad
    I recently stumbled over the object database engine db4o, which I think looks really interesting. I would like to use it in an ASP.NET MVC application that will be deployed to a shared hosting environment under medium trust. Because of the trust level, I'm restricted to using db4o in embedded/in-process mode. That in itself should be no problem, but the hosting provider also transparently runs each web application in multiple (load-balanced) server instances with shared storage, which I would say is normally a very nice feature for a $10/month hoster. However, since an instance of a db4o server with write access (whether in-process or networked) locks the underlying database file, having multiple instances of the application using the same file won't work (or at least I can't see how it would). So the question is: is it possible to use db4o in this specific environment?

    I have considered letting each application instance have its own database which is synchronized with a master database using replication (dRS), but that approach will most likely end up with very frequent bi-directional replication (read master changes at the beginning of each request, write to master after each change), which I don't think will be very efficient.

    Summary of the web application/environment characteristics:

    - Read-intensive (but not entirely read-only)
    - Some delay (a few seconds) is acceptable between the time a change is made and the time it shows up in all the application instances' data
    - Must run in medium trust
    - No guarantee that the load-balancer uses "sticky sessions"

    All suggestions are much appreciated!

    Read the article

  • HTML5 video (mp4 and ogv) problems in Safari and Firefox - but Chrome is all good

    - by qryss
    Hi folks, I have the following code:

        <video width="640" height="360" controls id="video-player" poster="/movies/poster.png">
          <source src="/movies/640x360.m4v" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'>
          <source src="/movies/640x360.ogv" type='video/ogg; codecs="theora, vorbis"'>
        </video>

    I'm using Rails (Mongrel in development and Mongrel+Apache in production). Chrome (Mac and Win) can play either file (tested with one source tag and then the other), whether locally or from my production servers. Safari (Mac and Win) can play the mp4 file fine locally but not from production. Firefox 3.6 won't play the video on either OS; I just get a grey cross in the middle of the video player area. I've made sure that both Mongrel and Apache in each case have the right MIME types set. From Chrome's results I know there is nothing inherently wrong with my video files or the way the files are being asked for or delivered. Anyone got any clues? Or even a clue as to how to diagnose the problem?

    For Firefox I looked at https://developer.mozilla.org/En/Using_audio_and_video_in_Firefox where it refers to an 'error' event and an 'error' attribute. It seems the 'error' event is thrown pretty much straight away, and at that time there is no error attribute. Very helpful... :( Help enormously appreciated! Thanks in advance... Chris
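    Since Chrome plays both files from production, one thing worth double-checking is that Apache maps the exact extensions used by the source tags (.m4v and .ogv, not just .mp4/.ogg) to the right MIME types; a sketch of the relevant directives (placement is an assumption; they can live in the vhost config or an .htaccess file):

        # Sketch: serve the extensions referenced by the <source> tags with the correct types.
        AddType video/mp4 .m4v .mp4
        AddType video/ogg .ogv .ogg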

    Read the article

  • DBI::DBD package not getting installed for Perl?

    - by Liju Mathew
    Hi, We are using a Perl utility to dump data from a DB2 database. We installed the DBI package and it is asking for the DBD package as well. We don't have root access, and when we try to install the DBD package we get the following error:

        ERROR BUILDING DB2.pm
        [lijumathew@intblade03 DBD-DB2-1.78]$ make
        make[1]: Entering directory '/home/lijumathew/lperl/perlsrc/DBD-DB2-1.78/Constants'
        gcc -c -I"/db2/db2tf1/sqllib/include" -D_REENTRANT -D_GNU_SOURCE -DDEBUGGING -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -DVERSION=\"1.78\" -DXS_VERSION=\"1.78\" -fPIC "-I/usr/lib/perl5/5.8.5/i386-linux-thread-multi/CORE" Constants.c
        Running Mkbootstrap for DBD::DB2::Constants ()
        chmod 644 Constants.bs
        rm -f ../blib/arch/auto/DBD/DB2/Constants/Constants.so
        gcc -shared -L/usr/local/lib Constants.o -o ../blib/arch/auto/DBD/DB2/Constants/Constants.so
        chmod 755 ../blib/arch/auto/DBD/DB2/Constants/Constants.so
        cp Constants.bs ../blib/arch/auto/DBD/DB2/Constants/Constants.bs
        chmod 644 ../blib/arch/auto/DBD/DB2/Constants/Constants.bs
        make[1]: Leaving directory `/home/lijumathew/lperl/perlsrc/DBD-DB2-1.78/Constants'
        gcc -c -I"/db2/db2tf1/sqllib/include" -I"/usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/vendor_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -I"/usr/lib/perl5/site_perl/5.8.5/i386-linux-thread-multi/auto/DBI" -D_REENTRANT -D_GNU_SOURCE -DDEBUGGING -fno-strict-aliasing -pipe -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -I/usr/include/gdbm -O2 -g -pipe -m32 -march=i386 -mtune=pentium4 -DVERSION=\"1.78\" -DXS_VERSION=\"1.78\" -fPIC "-I/usr/lib/perl5/5.8.5/i386-linux-thread-multi/CORE" DB2.c
        In file included from DB2.h:22, from DB2.xs:7:
        dbdimp.h:10:22: dbivport.h: No such file or directory
        make: *** [DB2.o] Error 1

    How do we fix this? Do we need root access to resolve it? Appreciate the help in advance. Thanks, Mathew Liju

    Read the article

  • VB.Net plugin using Matlab COM Automation Server...Error: 'Could not load Interop.MLApp'

    - by Ben
    My Problem: I am using the Matlab COM Automation Server to call and execute Matlab .m files from a VB.Net plugin for a CAD program called Rhino 3D. The code works flawlessly when set up as a simple Windows Application in Visual Studio, but when I insert it (and make the requisite reference) into my .Net plugin and test it in the CAD program I get the following error:

        "Could not load file or assembly 'Interop.MLApp, Version 1.0.0.0, culture=neutral, PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified."

    What I've Tried: I am baffled as to why this occurs, but I was able to contact the CAD program's technical support staff and they suggested that it has something to do with their DotNet SDK having trouble with references that are located far outside the CAD program directory. They didn't have any solutions, so I tried playing around with CopyLocal and this made no difference. I tried using other COM libraries: the Open Office automation server works fine, although it uses URLs instead of requiring a reference. I also tested Excel, which does require a reference, and it returned the error: "Retrieving the COM class factory for component with CLSID {...} failed due to the following error: 80040154." This may or may not be related to the issue with the Matlab COM reference, but I thought it was worthwhile to share. Perhaps there is another way to reference Interop.MLApp? I would appreciate any suggestions or thoughts on how I might make the Matlab Interop.MLApp reference work. Best regards, Ben

    Read the article

  • Should I redesign my code when my colleague says so?

    - by Kirill V. Lyadvinsky
    I wrote a function recently (with the help of one guy from SO) that finds the maximum of two ints. Here is the code:

        long get_max(long(*a)(long(*)(long(*)()), long(*)(long(*)(long**))),
                     long(*b)(long(*)(long(*)()), long*, long(*)(long(*)())))
        {
            return (long)((((long(*)(long(*)(long(*)()), long(*)(long(*)())))a) >
                           ((long(*)(long(*)(long(*)()), long(*)(long(*)())))b))
                              ? ((long(*)(long(*)(long(*)()), long(*)(long(*)())))a)
                              : ((long(*)(long(*)(long(*)()), long(*)(long(*)())))b));
        }

        int main()
        {
            long x = get_max(
                (long(*)(long(*)(long(*)()), long(*)(long(*)(long**)))) 500,
                (long(*)(long(*)(long(*)()), long*, long(*)(long(*)()))) 100);
            cout << x << endl;  // prints 500 as expected
            return 0;
        }

    It works fine, but my colleague says that I shouldn't use C-style casts. I think that all those modern static_casts and reinterpret_casts will make my code too cumbersome. Who's right? Should I redesign my code using C++-style casts, or is the original code OK?

    EDIT: For those who mark this question as not a question, I'll try to be more clear: should I use C++-style casts instead of C-style casts in the code above?

    Read the article

  • OpenCV to JNI how to make it work?

    - by padfoot-4444
    I am trying to use OpenCV and Java for face detection, and in that pursuit I found this "JNI2OPENCV" file... but I am confused about how to make it work. Can anyone help me?

    http://img519.imageshack.us/img519/4803/askaj.jpg

    The following is FaceDetection.java:

        class JNIOpenCV {
            static {
                System.loadLibrary("JNI2OpenCV");
            }
            public native int[] detectFace(int minFaceWidth, int minFaceHeight, String cascade, String filename);
        }

        public class FaceDetection {
            private JNIOpenCV myJNIOpenCV;
            private FaceDetection myFaceDetection;

            public FaceDetection() {
                myJNIOpenCV = new JNIOpenCV();
                String filename = "lena.jpg";
                String cascade = "haarcascade_frontalface_alt.xml";
                int[] detectedFaces = myJNIOpenCV.detectFace(40, 40, cascade, filename);
                int numFaces = detectedFaces.length / 4;
                System.out.println("numFaces = " + numFaces);
                for (int i = 0; i < numFaces; i++) {
                    System.out.println("Face " + i + ": "
                        + detectedFaces[4 * i + 0] + " " + detectedFaces[4 * i + 1] + " "
                        + detectedFaces[4 * i + 2] + " " + detectedFaces[4 * i + 3]);
                }
            }

            public static void main(String args[]) {
                FaceDetection myFaceDetection = new FaceDetection();
            }
        }

    Can anyone tell me how I can make this work in NetBeans? I tried Google, but help on this particular topic is very meager. I have added the whole folder as a library in the NetBeans project, and when I try to run the file I get the following errors:

        Exception in thread "main" java.lang.UnsatisfiedLinkError: FaceDetection.JNIOpenCV.detectFace(IILjava/lang/String;Ljava/lang/String;)[I
                at FaceDetection.JNIOpenCV.detectFace(Native Method)
                at FaceDetection.FaceDetection.<init>(FaceDetection.java:19)
                at FaceDetection.FaceDetection.main(FaceDetection.java:29)
        Java Result: 1
        BUILD SUCCESSFUL (total time: 2 seconds)

    Can anyone tell me the exact way to work with this? Like, what all do I have to do?
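    A guess based on the stack trace: JNI resolves native methods by the fully qualified class name, so a prebuilt JNI2OpenCV library compiled for a JNIOpenCV class in the default package will not match FaceDetection.JNIOpenCV. A sketch of a declaration that would match such a library (how the DLL was actually built is an assumption here):

        // JNIOpenCV.java (sketch): no package statement, so the native method resolves to
        // Java_JNIOpenCV_detectFace, matching a DLL built for the default package.
        class JNIOpenCV {
            static {
                // The DLL's folder must be on java.library.path,
                // e.g. java -Djava.library.path=C:\path\to\dlls FaceDetection
                System.loadLibrary("JNI2OpenCV");
            }
            public native int[] detectFace(int minFaceWidth, int minFaceHeight, String cascade, String filename);
        }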

    Read the article

  • Getting Attributes of Keychain Items

    - by rgov
    I'm trying to get the attributes of a keychain item. This code should look up all the available attributes, then print their tags and contents. According to the docs I should be seeing tags like 'cdat', but instead they just look like an index (i.e., the first tag is 0, the next is 1). This makes it pretty useless, since I can't tell which attribute is the one I'm looking for.

        SecItemClass itemClass;
        SecKeychainItemCopyAttributesAndData(itemRef, NULL, &itemClass, NULL, NULL, NULL);

        SecKeychainRef keychainRef;
        SecKeychainItemCopyKeychain(itemRef, &keychainRef);

        SecKeychainAttributeInfo *attrInfo;
        SecKeychainAttributeInfoForItemID(keychainRef, itemClass, &attrInfo);

        SecKeychainAttributeList *attributes;
        SecKeychainItemCopyAttributesAndData(itemRef, attrInfo, NULL, &attributes, 0, NULL);

        for (int i = 0; i < attributes->count; i++) {
            SecKeychainAttribute attr = attributes->attr[i];
            NSLog(@"%08x %@", attr.tag, [NSData dataWithBytes:attr.data length:attr.length]);
        }

        SecKeychainFreeAttributeInfo(attrInfo);
        SecKeychainItemFreeAttributesAndData(attributes, NULL);
        CFRelease(itemRef);
        CFRelease(keychainRef);

    Read the article

  • How to get correct Set-Cookie headers for NSHTTPURLResponse?

    - by overboming
    I want to use the following code to log in to a website which returns its cookie information in the following manner:

        Set-Cookie: 19231234
        Set-Cookie: u2am1342340
        Set-Cookie: owwjera

    I'm using the code below to log in to the site, but the print statement at the end doesn't output anything about "set-cookie". On Snow Leopard, the library seems to automatically pick up the cookie for this site, and later connections are sent out with the correct "cookie" headers. On Leopard it doesn't work that way, so what triggers this "remember the cookie for a certain root URL" behavior?

        NSMutableURLRequest *request = [[[NSMutableURLRequest alloc] init] autorelease];
        [request setURL:[NSURL URLWithString:uurl]];
        [request setHTTPMethod:@"POST"];
        [request setValue:postLength forHTTPHeaderField:@"Content-Length"];
        [request setValue:@"application/x-www-form-urlencoded" forHTTPHeaderField:@"Content-Type"];
        [request setValue:@"keep-live" forHTTPHeaderField:@"Connection"];
        [request setValue:@"300" forHTTPHeaderField:@"Keep-Alive"];
        [request setHTTPShouldHandleCookies:YES];
        [request setHTTPBody:postData];
        [request setTimeoutInterval:10.0];

        NSData *urlData;
        NSHTTPURLResponse *response;
        NSError *error;
        urlData = [NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error];
        NSLog(@"response dictionary %@", [response allHeaderFields]);
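    A small sketch of one way to pull cookies out of the response explicitly, rather than relying on the automatic cookie handling (this assumes the response really does carry Set-Cookie fields):

        // Sketch: turn the response's Set-Cookie headers into NSHTTPCookie objects
        // and store them for later requests to the same site.
        NSDictionary *headerFields = [response allHeaderFields];
        NSArray *cookies = [NSHTTPCookie cookiesWithResponseHeaderFields:headerFields
                                                                  forURL:[request URL]];
        NSLog(@"parsed %lu cookie(s)", (unsigned long)[cookies count]);
        [[NSHTTPCookieStorage sharedHTTPCookieStorage] setCookies:cookies
                                                           forURL:[request URL]
                                                  mainDocumentURL:nil];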

    Read the article

  • Writing tests for Rails plugins

    - by Adam
    I'm working on a plugin for Rails that would add limited in-memory caching to ActiveRecord's finders. The functionality itself is mature enough, but I can't for the life of me get unit tests to work with the plugin. I now have, under vendor/plugins/my_plugin/test/my_plugin_test.rb, a standard subclass of ActiveSupport::TestCase with a couple of basic tests. I try running 'rake test' from the plugin directory, and I have confirmed that this task loads the ruby file with the test case, but it doesn't actually run any of the tests. I followed the Rails plugin guide (http://guides.rubyonrails.org/plugins.html) where applicable, but it seems to be horribly outdated (it suggests things that Rails now does automatically, etc.). The only output I get is this:

        Kakadu:ingenious_record adam$ rake test
        (in /Users/adam/Sites/1_PRK/vendor/plugins/ingenious_record)
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby -Ilib:lib:test "/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake/rake_test_loader.rb" "test/ingenious_record_test.rb"

    The simplest test case looks like this:

        require 'test_helper'
        require 'active_record'

        class IngeniousRecordTest < ActiveSupport::TestCase
          test "example" do
            assert false
          end
        end

    This should definitely produce at least some output, and the only test in that file should produce a failed assertion. Any ideas what I could do to get Rails to run my tests?
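    For comparison, this is the sort of Rakefile test task a Rails 2.x-era plugin normally relies on; a minimal sketch (paths assume the standard vendor/plugins layout) that would live in vendor/plugins/ingenious_record/Rakefile:

        # Rakefile (sketch): standard test task for a plugin.
        require 'rake'
        require 'rake/testtask'

        Rake::TestTask.new(:test) do |t|
          t.libs << 'lib' << 'test'          # so `require 'test_helper'` resolves
          t.pattern = 'test/**/*_test.rb'    # picks up ingenious_record_test.rb
          t.verbose = true
        end

        task :default => :test

    With a task like that in place, rake test run from the plugin directory should both load and execute every test in test/*_test.rb.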

    Read the article

  • Programmatically insert a Word document into an existing document (Word 2007)

    - by cjb
    I have a Word 2007 document that I want to insert an existing Word document into, while preserving the header/footer, graphics, borders, etc. of both documents. I'm doing this using the Word API in C#. It sounds pretty simple; I mean, surely you just use the "InsertFile" method... except that in Word 2007 the "insert file" functionality is now actually "insert text from file", and it does just that, leaving out the page border, graphics, footer, etc. OK then, I'll use copy and paste instead, like so:

        _Document sourceDocument = wordApplication.Documents.Open(insert the 8 million by-ref parameters Word requires);
        sourceDocument.Activate(); // This is the document I am copying from
        wordApplication.Selection.WholeStory();
        wordApplication.Selection.Copy();
        targetDocument.Activate(); // This is the document I am pasting into
        wordApplication.Selection.InsertBreak(wdSectionBreakNextPage);
        Selection.PasteAndFormat(wdFormatOriginalFormatting);
        wordApplication.Selection.InsertBreak(wdSectionBreakNextPage);

    This does what you would expect: it takes the source document, selects everything, copies it, then pastes it into the target document. Because I've added a section break before doing the paste, it also preserves the borders and header/footer of both documents. However, and this is where I have the problem, the paste only includes the borders, header, etc. if I paste at the end of the target document. If I paste in the middle, despite there being a preceding section break, only the text gets pasted and the header and borders are lost. Can anyone help before I buy a grenade and a one-way ticket to Redmond...

    Read the article

  • Save HTML to PDF with ABCPdf 4

    - by daisy
    Hello all, I'm using the following code to save HTML to a PDF file, but it fails to compile at if (!theDoc.Chainable(theID)). I do have using WebSupergoo.ABCpdf4; added at the beginning of the code. Is this a version issue? Is there any other method to save an HTML string to a PDF file in ABCpdf 4? The error message is: "'WebSupergoo.ABCpdf4.Doc' does not contain a definition for 'Chainable' and no extension method 'Chainable' accepting a first argument of type 'WebSupergoo.ABCpdf4.Doc' could be found (are you missing a using directive or an assembly reference?)"

        Doc theDoc = new Doc();
        theDoc.Rect.Inset(10, 50);
        theDoc.Page = theDoc.AddPage();
        int theID;
        theID = theDoc.AddImageHtml(str);
        while (true)
        {
            if (!theDoc.Chainable(theID))
                break;
            theDoc.Page = theDoc.AddPage();
            theID = theDoc.AddImageToChain(theID);
        }
        for (int i = 1; i <= theDoc.PageCount; i++)
        {
            theDoc.PageNumber = i;
            theDoc.Flatten();
        }
        theDoc.Save("path where u want 2 save" + ".pdf");
        theDoc.Clear();

    All help is appreciated.

    Read the article

  • How to properly reference a namespace for Microsoft.Sdc.Tasks.XmlFile.GetValue

    - by æther
    Hi, I want to use MSBuild to insert a custom XML element into web.config. After looking online, I found the following approach:

    1) Store the element in the .build file inside ProjectExtensions:

        <ProjectExtensions>
          <CustomElement name="CustomElementName">
            ...
          </CustomElement>
        </ProjectExtensions>

    2) Retrieve the element with GetValue:

        <Target Name="ModifyConfig">
          <XmlFile.GetValue Path="$(MSBuildProjectFullPath)"
                            XPath="Project/ProjectExtensions/CustomElement[@name='CustomElementName']">
            <Output TaskParameter="Value" PropertyName="CustomElementProperty"/>
          </XmlFile.GetValue>
        </Target>

    This does not work as-is: I need to reference the namespace the .build project is using for it to find the element (I checked the .build file with XPath Visualizer). So I looked for a further solution and came to this:

        <ItemGroup>
          <XmlNamespace Include="MSBuild">
            <Prefix>msb</Prefix>
            <Uri>http://schemas.microsoft.com/developer/msbuild/2003</Uri>
          </XmlNamespace>
        </ItemGroup>

        <Target Name="ModifyConfig">
          <XmlFile.GetValue Path="$(MSBuildProjectFullPath)"
                            Namespaces="$(XmlNamespace)"
                            XPath="/msb:Project/msb:ProjectExtensions/msb:CustomElement[@name='CustomElementName']">
            <Output TaskParameter="Value" PropertyName="CustomElementProperty"/>
          </XmlFile.GetValue>
        </Target>

    But for some reason the namespace is not recognized; MSBuild reports the following error:

        C:...\mybuild.build(53,9): error : A task error has occured.
        C:...\mybuild.build(53,9): error : Message = Namespace prefix 'msb' is not defined.

    I tried some variations of referencing it differently, but none work, and there is not much online about properly referencing those namespaces either. Can you tell me what I am doing wrong and how to do it properly?
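    One detail that stands out, offered as a guess: Namespaces is being fed $(XmlNamespace), which is property syntax and expands to nothing here, since XmlNamespace was declared as an item. Referencing it as an item may be all that is missing; a sketch:

        <!-- Sketch: pass the namespace item (prefix msb, MSBuild schema URI) with @(), not $() -->
        <Target Name="ModifyConfig">
          <XmlFile.GetValue Path="$(MSBuildProjectFullPath)"
                            Namespaces="@(XmlNamespace)"
                            XPath="/msb:Project/msb:ProjectExtensions/msb:CustomElement[@name='CustomElementName']">
            <Output TaskParameter="Value" PropertyName="CustomElementProperty"/>
          </XmlFile.GetValue>
        </Target>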

    Read the article

  • Visual Studio 2008 Preprocessor weirdness

    - by Canacourse
    We have set up a simple versioning system for our builds to ensure the built files always indicate whether they are Beta Debug or Beta Release builds. I moved the file version info to myapp.rc2 and created version.h:

        // version.h
        // _DEBUG is defined by VS
        #define _BETA

        #ifdef _BETA
        #define FILE_DESC1 _T("BETA ")
        #else
        #define FILE_DESC1                           // blank on purpose
        #endif

        #ifdef _DEBUG
        #define FILE_DESC2 _T("Debug Version ")
        #else
        #define FILE_DESC2 _T("Release Version ")    // this is greyed out in the IDE when building
        #endif

        #define FILE_DESC FILE_DESC1 FILE_DESC2

        // myapp.rc2
        #include "version.h"

        #ifndef _T
        #define _T(x) x
        #endif

        VS_VERSION_INFO VERSIONINFO
         FILEVERSION PROD_VER_MJR,PROD_VER_MIN,PROD_VER_UPD,JOBUILDER_BUILD
         PRODUCTVERSION PROD_VER_MJR,PROD_VER_MIN
         FILEFLAGSMASK 0x3fL
        #ifdef _DEBUG
         FILEFLAGS 0x1L
        #else
         FILEFLAGS 0x0L
        #endif
         FILEOS 0x4L
         FILETYPE 0x1L
         FILESUBTYPE 0x0L
        BEGIN
            BLOCK "StringFileInfo"
            BEGIN
                BLOCK "040904e4"
                BEGIN
                    VALUE "CompanyName", COMPANY_NAME
                    VALUE "FileDescription", FILE_DESC
                    VALUE "FileVersion", JOBBUI_VERSION
                    VALUE "InternalName", "MyApp.exe"
                    VALUE "LegalCopyright", COPYRIGHT
                    VALUE "OriginalFilename", "MyApp.exe"
                    VALUE "ProductName", PRODUCT_NAME
                    VALUE "ProductVersion", PRODUCT_VERSION
                    VALUE "Comments", COMMENTS
                END
            END
            BLOCK "VarFileInfo"
            BEGIN
                VALUE "Translation", 0x409, 1252
            END
        END

    However, when the exe is built in the debug output directory, the file description always incorrectly says "BETA Release Version" instead of "BETA Debug Version", yet the IDE indicates that #define FILE_DESC2 _T("Debug Version ") would be used. Why might this be? I have used these files on another project and they work correctly.

    Read the article

  • Using the public ssh key from local machine to access two remote users [closed]

    - by Nick
    I have a new Ubuntu (Hardy 8.04) server; it has two users, Alice and Bob. Alice is listed in sudoers. I appended my public ssh key (my local machine's public key local/Users/nick/.ssh/id_rsa.pub) to authorized_keys in remote_server/home/Alice/.ssh/authorized_keys, changed the permissions on Alice/.ssh/ to 700 and Alice/.ssh/authorized_keys to 600 (both the file and folder are owned by Alice), then added Alice to sshd_config (AllowUsers Alice). This works and I can log in as Alice:

        ssh -v [email protected]
        ...
        debug1: Offering public key: /Users/nick/.ssh/id_rsa
        debug1: Server accepts key: pkalg ssh-rsa blen 277
        debug1: Authentication succeeded (publickey).
        debug1: channel 0: new [client-session]
        debug1: Entering interactive session.
        Last login: Mon Mar 15 09:51:01 2010 from 123.456.789.00

    I then copied the authorized_keys file remote_server/home/Alice/.ssh/authorized_keys to remote_server/home/Bob/.shh/authorized_keys, changed the permissions and ownership, and added Bob to AllowUsers in sshd_config (AllowUsers Alice Bob). Now when I try to log in as Bob, it will not authenticate the same public key:

        ssh -v [email protected]
        ...
        debug1: Offering public key: /Users/nick/.ssh/id_rsa
        debug1: Authentications that can continue: publickey
        debug1: Trying private key: /Users/nick/.ssh/identity
        debug1: Trying private key: /Users/nick/.ssh/id_dsa
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    Am I missing something fundamental about the way ssh works?

    Read the article

  • iPhone UnitTesting UITextField value and otest error 133

    - by Justin Galzic
    Are UITextFields not meant to be part of the logic tests, and instead part of the application test target? I have a factory class that is responsible for creating and returning an (iPhone) UITextField, and I'm trying to unit test it. It is part of my logic test target, and when I try to build and run the tests, I get a build error:

        /Developer/Tools/RunPlatformUnitTests.include:451:0 Test rig '/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator3.1.2.sdk/Developer/usr/bin/otest' exited abnormally with code 133 (it may have crashed).

    In the build window, this points to the following line in RunPlatformUnitTests.include:

        RPUTIFail ${LINENO} "Test rig '${TEST_RIG}' exited abnormally with code ${TEST_RIG_RESULT} (it may have crashed)."

    My unit test looks like this:

        #import <SenTestingKit/SenTestingKit.h>
        #import <UIKit/UIKit.h>

        // Test-subject headers.
        #import "TextFieldFactory.h"

        @interface TextFieldFactoryTests : SenTestCase {
        }
        @end

        @implementation TextFieldFactoryTests

        #pragma mark Test Setup/teardown
        - (void)setUp {
            NSLog(@"%@ setUp", self.name);
        }

        - (void)tearDown {
            NSLog(@"%@ tearDown", self.name);
        }

        #pragma mark Tests
        - (void)testUITextField_NotASecureField {
            NSLog(@"%@ start", self.name);
            UITextField *textField = [TextFieldFactory createTextField:YES];
            NSLog(@"%@ end", self.name);
        }

        @end

    The class I'm trying to test:

        // Header file
        #import <Foundation/Foundation.h>
        #import <UIKit/UIKit.h>

        @interface TextFieldFactory : NSObject {
        }
        + (UITextField *)createTextField:(BOOL)isSecureField;
        @end

        // Implementation file
        #import "TextFieldFactory.h"

        @implementation TextFieldFactory

        + (UITextField *)createTextField:(BOOL)isSecureField {
            // x, y, z, w are constants declared elsewhere
            UITextField *textField = [[[UITextField alloc] initWithFrame:CGRectMake(x, y, z, w)] autorelease];
            // some initialization code
            return textField;
        }

        @end

    Read the article

  • What components and IDE add-ins do you install with Delphi?

    - by Mick
    After a clean install of Delphi, what components and IDE add-ins do you make certain that you install? What's your Delphi "rig"? Here's what I install after a clean installation of Delphi 2007:

    - JCL / JVCL - JEDI Code Library and JEDI Visual Code Library (600+ components)
    - JWA / JWSCL - JEDI API Library & Security Code Library
    - GExperts - a free set of tools built to increase the productivity of Delphi and C++Builder programmers by adding several features to the IDE
    - TWM's experimental GExperts code formatter - adds code formatting capabilities to Delphi
    - Virtual TreeView - a treeview control built from the ground up; more than 5 years of development made it one of the most flexible and advanced tree controls available today
    - MustangPeak components (EasyListview, Virtual ShellTools, etc.) - EasyListview is a control that has no dependence on the Microsoft Listview control but has all the features of the latest version from Microsoft; also includes 'Explorer.exe'-like shell components
    - Synapse lightweight networking components - simple low-level non-visual objects for easy programming without problems (no required multi-threaded synchronization, no need for Windows message processing, ...); great for command-line utilities, visual projects, NT services
    - EurekaLog - a complete bug resolution tool for Delphi and C++Builder developers that gives your application the power to catch every exception and memory leak, directly on the end user's PC, generating a detailed log of the call stack (with file, class, method and line number), optionally sending you a copy of each log entry via email or to a web bug-tracker
    - DelphiSpeedUp - an IDE plugin for Delphi and C++Builder that improves the IDE's startup speed and increases the general speed of the whole IDE
    - DDevExtensions - extends the Delphi/C++Builder IDE by adding some new productivity features
    - IDE Fix Pack - installs a DLL expert that fixes a number of RAD Studio 2007 bugs at runtime; all changes are done in memory, no file on disk is modified
    - TPerlRegex - regular expression library for Delphi

    How about other Delphi developers?

    Read the article

  • Excel Plug-In Assembly Loading Problem (Access Denied)

    - by PlagueEditor
    I am developing an Excel 2003 add-in using Visual Studio 2008. My add-in loads fine; however, it loads plug-ins from other C# DLLs. I would like this to be done dynamically at run time, so referencing them during development is something I would rather not do. Anyway, any time I try to load a DLL from the Excel add-in at start-up, it throws a security exception. This particular example is HTML Agility Pack. It's not a plug-in but a plug-in's dependency; nonetheless it won't even load:

        {System.IO.FileLoadException: Could not load file or assembly 'HtmlAgilityPack, Version=1.4.0.0, Culture=neutral, PublicKeyToken=bd319b19eaf3b43a' or one of its dependencies. Failed to grant permission to execute. (Exception from HRESULT: 0x80131418)
        File name: 'HtmlAgilityPack, Version=1.4.0.0, Culture=neutral, PublicKeyToken=bd319b19eaf3b43a'
        ---> System.Security.Policy.PolicyException: Execution permission cannot be acquired.
           at System.Security.SecurityManager.ResolvePolicy(Evidence evidence, PermissionSet reqdPset, PermissionSet optPset, PermissionSet denyPset, PermissionSet& denied, Boolean checkExecutionPermission)
           at System.Security.SecurityManager.ResolvePolicy(Evidence evidence, PermissionSet reqdPset, PermissionSet optPset, PermissionSet denyPset, PermissionSet& denied, Int32& securitySpecialFlags, Boolean checkExecutionPermission)
           at System.Reflection.Assembly.nLoadFile(String path, Evidence evidence)
           at System.Reflection.Assembly.LoadFile(String path)
           at Cjack.Druid.SourcePluginManager.LoadPlugin(String filePath) in C:\Documents and Settings\Annie Tormey\My Documents\Visual Studio 2008\Projects\DruidAddin2003\Druid\SourcePluginManager.cs:line 26 }

    This is extremely frustrating because it runs perfectly fine for Office 2010 and as a standalone application. Thank you to anyone who can give me an answer as to why this is happening or a solution to fix it. Thank you for your time.

    Read the article

  • Cannot add SourceSafe Database as Visual Studio 2010 source control.

    - by CletusLoomis
    My issue is that I cannot add a SourceSafe database for source control within Visual Studio 2010. Our team was initially using VSS for source control in Visual Studio 2010. During an evaluation of TFS, I switched my source control to TFS. It will be a few weeks before a decision is made on TFS, so I needed to switch my source control back to VSS. However, I'm now unable to add a SourceSafe database in Visual Studio.

    Steps to reproduce in Visual Studio 2010:

    1) Access the 'Open SourceSafe Database' form via Tools-Options-Source Control-Plug-in Settings-Advanced, or via File-Source Control.
    2) The list of available databases is blank, so I choose 'Browse'.
    3) I browse to the srcsafe.ini file for my VSS database and select it.
    4) I'm prompted to confirm the Database Name - click OK.
    5) The database does not appear in the 'Open SourceSafe Database' form. The list of available databases is still blank.

    Note that I can add the database fine outside of Visual Studio using VSS directly; however, the databases I add via VSS do not appear in the Visual Studio forms. I'm suspicious that this is related to "down-grading" from TFS to VSS, which may not have been heavily tested at MS. Any assistance is appreciated.

    Read the article

  • Zend_Translate scan translation files

    - by nute
    I'm trying to use Zend_Translate from Zend Framework. I am using POEdit to generate gettext translation files. My files are under /www/mysite.com/webapps/includes/locale (this path is in my include path). I have:

        pictures.en.mo
        pictures.en.po

    (I plan on having pictures.es.mo soon.) It all works fine if I manually call addTranslation() for each file. However, I want to use the automatic file-scanning method. I tried both of these:

        <?php
        /* Localization */
        require_once 'Zend/Translate.php';
        require_once 'Zend/Locale.php';

        define('LOCALE', '/www/mysite.com/webapps/includes/locale');

        if (!empty($_GET['locale'])) {
            $locale = new Zend_Locale($_GET['locale']);
        } else {
            $locale = new Zend_Locale();
        }

        $translate = new Zend_Translate('gettext', LOCALE, null,
            array('scan' => Zend_Translate::LOCALE_FILENAME));

        if ($translate->isAvailable($locale->getLanguage())) {
            $translate->setLocale($locale);
        } else {
            $translate->setLocale('en');
        }

    and this:

        <?php
        /* Localization */
        require_once 'Zend/Translate.php';
        require_once 'Zend/Locale.php';

        define('LOCALE', '/www/mysite.com/webapps/includes/locale');

        if (!empty($_GET['locale'])) {
            $locale = new Zend_Locale($_GET['locale']);
        } else {
            $locale = new Zend_Locale();
        }

        $translate = new Zend_Translate('gettext', LOCALE);

        if ($translate->isAvailable($locale->getLanguage())) {
            $translate->setLocale($locale);
        } else {
            $translate->setLocale('en');
        }

    In both cases, I get: Notice: No translation for the language 'en' available. in /www/mysite.com/webapps/includes/Zend/Translate/Adapter.php on line 411. It also worked if I tried to do directory scanning.
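    For reference, a sketch of the manual registration that reportedly works here (file names taken from above; the second call is commented out until pictures.es.mo exists), which can serve as a fallback while the scanning issue is sorted out:

        // Sketch: register each .mo file explicitly instead of scanning the directory.
        $translate = new Zend_Translate('gettext', LOCALE . '/pictures.en.mo', 'en');
        // $translate->addTranslation(LOCALE . '/pictures.es.mo', 'es');
        $translate->setLocale($locale->getLanguage());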

    Read the article
