Search Results

Search found 12269 results on 491 pages for 'tool chain'.

  • Informix, NHibernate, TransactionScope interaction difficulties

    - by John Prideaux
    I have a small program that is trying to wrap an NHibernate insert into an Informix database in a TransactionScope object using the Informix .NET Provider. I am getting the error specified below. The code without the TransactionScope object works -- including when the insert is wrapped in an NHibernate session transaction. Any ideas on what the problem is? BTW, without the EnterpriseServicesInterop, the Informix .NET Provider will not participate in a TransactionScope transaction (verified without NHibernate involved).

    Code Snippet:

        public static void TestTScope()
        {
            Employee johnp = new Employee { name = "John Prideaux" };
            using (TransactionScope tscope = new TransactionScope(
                TransactionScopeOption.Required,
                new TransactionOptions() { Timeout = new TimeSpan(0, 1, 0), IsolationLevel = IsolationLevel.ReadCommitted },
                EnterpriseServicesInteropOption.Full))
            {
                using (ISession session = OpenSession())
                {
                    session.Save(johnp);
                    Console.WriteLine("Saved John to the database");
                }
            }
            Console.WriteLine("Transaction should be rolled back");
        }

        static ISession OpenSession()
        {
            if (factory == null)
            {
                Configuration c = new Configuration();
                c.AddAssembly(Assembly.GetCallingAssembly());
                factory = c.BuildSessionFactory();
            }
            return factory.OpenSession();
        }

        static ISessionFactory factory;

    Stack Trace:

        NHibernate.ADOException was unhandled
          Message="Could not close IBM.Data.Informix.IfxConnection connection"
          Source="NHibernate"
          StackTrace:
            at NHibernate.Connection.ConnectionProvider.CloseConnection(IDbConnection conn)
            at NHibernate.Connection.DriverConnectionProvider.CloseConnection(IDbConnection conn)
            at NHibernate.Tool.hbm2ddl.SuppliedConnectionProviderConnectionHelper.Release()
            at NHibernate.Tool.hbm2ddl.SchemaMetadataUpdater.GetReservedWords(Dialect dialect, IConnectionHelper connectionHelper)
            at NHibernate.Tool.hbm2ddl.SchemaMetadataUpdater.Update(ISessionFactory sessionFactory)
            at NHibernate.Impl.SessionFactoryImpl..ctor(Configuration cfg, IMapping mapping, Settings settings, EventListeners listeners)
            at NHibernate.Cfg.Configuration.BuildSessionFactory()
            at HelloNHibernate.Employee.OpenSession() in D:\Development\ScratchProject\HelloNHibernate\Employee.cs:line 73
            at HelloNHibernate.Employee.TestTScope() in D:\Development\ScratchProject\HelloNHibernate\Employee.cs:line 53
            at HelloNHibernate.Program.Main(String[] args) in D:\Development\ScratchProject\HelloNHibernate\Program.cs:line 19
            at System.AppDomain._nExecuteAssembly(Assembly assembly, String[] args)
            at System.AppDomain.ExecuteAssembly(String assemblyFile, Evidence assemblySecurity, String[] args)
            at Microsoft.VisualStudio.HostingProcess.HostProc.RunUsersAssembly()
            at System.Threading.ThreadHelper.ThreadStart_Context(Object state)
            at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state)
            at System.Threading.ThreadHelper.ThreadStart()
          InnerException: IBM.Data.Informix.IfxException
            Message="ERROR - no error information available"
            Source="IBM.Data.Informix"
            ErrorCode=-2147467259
            StackTrace:
              at IBM.Data.Informix.IfxConnection.HandleError(IntPtr hHandle, SQL_HANDLE hType, RETCODE retcode)
              at IBM.Data.Informix.IfxConnection.DisposeClose()
              at IBM.Data.Informix.IfxConnection.Close()
              at NHibernate.Connection.ConnectionProvider.CloseConnection(IDbConnection conn)
            InnerException:

  • Public key of Android project and keystore created in Eclipse?

    - by user578056
    I created an Android project using Eclipse (under Windows FWIW) and let Eclipse create the keypair during the Export Android Application process. I successfully used Eclipse to make a signed release build that is now on the Market. Now I want to use ProGuard, which I believe means using Ant instead of Eclipse to build. It was a pain, but Ant building works in both debug and release, until it tries to sign the APK. I get:

        [signjar] jarsigner: Certificate chain not found for: redacted.
        redacted must reference a valid KeyStore key entry containing a private key
        and corresponding public key certificate chain.

    keytool -list -keystore redacted gives me:

        Keystore type: JKS
        Keystore provider: SUN

        Your keystore contains 1 entry

        redacted, Jan 16, 2011, PrivateKeyEntry,
        Certificate fingerprint (MD5): BD:0F:70:C1:39:F5:FE:5B:BC:CD:89:0B:C8:66:95:E0

    Which brings me to the actual question: where is my public key? I have some sort of public key on my Android Market profile, but is that the pair for my private key? If so, how do I store that in the keystore so that jarsigner will work?
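    For what it's worth, a JKS PrivateKeyEntry already carries the public half of the key pair inside the certificate stored with it, so it can be read back programmatically. A minimal sketch in Java; the keystore path, alias and password below are placeholders, not values from the question:

        import java.io.FileInputStream;
        import java.security.KeyStore;
        import java.security.cert.Certificate;

        public class ShowPublicKey {
            public static void main(String[] args) throws Exception {
                char[] password = "changeit".toCharArray();          // placeholder store password
                KeyStore ks = KeyStore.getInstance("JKS");
                try (FileInputStream in = new FileInputStream("release.keystore")) { // placeholder path
                    ks.load(in, password);
                }
                // The certificate attached to the PrivateKeyEntry holds the public key.
                Certificate cert = ks.getCertificate("redacted");    // alias reported by keytool -list
                System.out.println(cert.getPublicKey());
            }
        }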

  • How to prepopulate an Outlook MailItem and avoid a COM exception from the object model guard

    - by torrente
    Hi, I work for a company that develops a CRM tool and offers integration with MS Office (2003 & 2007) from Windows XP to 7. (I'm working using Win7.) My task is to call an Outlook instance (using C#) from this CRM tool when the user wants to send an email, and prepopulate it with data from the CRM tool (email, recipient, etc.). All of this already works just fine.

    The problem I'm having is that Outlook's "object model guard" throws a COM exception (Operation aborted (Exception from HRESULT: 0x80004004 (E_ABORT))) the moment I try to read a protected value from the MailItem (such as mail.HTMLBody). Example snippet:

        using MSOutlook = Microsoft.Office.Interop.Outlook;

        // untrusted instance
        _outlook = new MSOutlook.Application();
        MSOutlook.MailItem mail = (MSOutlook.MailItem)_outlook.CreateItem(MSOutlook.OlItemType.olMailItem);

        // this is where the exception occurs
        string outlookStdHTMLBody = mail.HTMLBody;

    I've done quite a bit of reading and know that my Outlook instance (created with new Application) is considered untrusted and therefore the "omg" kicks in. I do have a workaround for development: I'm running VS2010 as Administrator, and if I run Outlook as Administrator as well - all is good. I suppose this is due to them having the same integrity level (high) and the UAC(?) not complaining. But that just ain't the way to go for deployment.

    Now the question is: is there a way to obtain a trusted instance of Outlook so that I can avoid this exception? I've already read that when developing an Office Add-In using VSTO one can obtain a trusted instance from the OnComplete event and/or using "ThisAddin". But I "merely" want to start an Outlook instance and prepopulate it, and do not want to develop an Add-In since this is not the requirement. And to make it clear - I have no problem with pop-ups informing the user that Outlook is being accessed - I just want to get rid of the exception! So how can I get around this problem in code? Any help is highly appreciated! Thomas

  • Project Euler Question 14 (Collatz Problem)

    - by paradox
    The following iterative sequence is defined for the set of positive integers:

        n → n/2    (n is even)
        n → 3n + 1 (n is odd)

    Using the rule above and starting with 13, we generate the following sequence:

        13 → 40 → 20 → 10 → 5 → 16 → 8 → 4 → 2 → 1

    It can be seen that this sequence (starting at 13 and finishing at 1) contains 10 terms. Although it has not been proved yet (Collatz Problem), it is thought that all starting numbers finish at 1. Which starting number, under one million, produces the longest chain? NOTE: Once the chain starts the terms are allowed to go above one million.

    I tried coding a solution to this in C using the brute-force method. However, it seems that my program stalls when trying to calculate 113383. Please advise :)

        #include <stdio.h>
        #define LIMIT 1000000

        int iteration(int value)
        {
            if (value % 2 == 0)
                return (value / 2);
            else
                return (3 * value + 1);
        }

        int count_iterations(int value)
        {
            int count = 1;
            //printf("%d\n", value);
            while (value != 1)
            {
                value = iteration(value);
                //printf("%d\n", value);
                count++;
            }
            return count;
        }

        int main()
        {
            int iteration_count = 0, max = 0;
            int i, count;
            for (i = 1; i < LIMIT; i++)
            {
                printf("Current iteration : %d\n", i);
                iteration_count = count_iterations(i);
                if (iteration_count > max)
                {
                    max = iteration_count;
                    count = i;
                }
            }
            //iteration_count = count_iterations(113383);
            printf("Count = %d\ni = %d\n", max, count);
        }
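    The stall at 113383 is the classic overflow trap in this problem: intermediate terms of that chain grow past the range of a 32-bit signed int, the value wraps into negative territory and the loop never reaches 1. Widening the working value to a 64-bit type (unsigned long long in the C above) is the usual fix. A sketch of the same logic, written in Java only to keep the examples added to this page in one language:

        public class Collatz {
            static final int LIMIT = 1000000;

            // Length of the Collatz chain starting at n; the running value is a 64-bit long
            // because intermediate terms (e.g. in the chain for 113383) overflow a 32-bit int.
            static int chainLength(long n) {
                int count = 1;
                while (n != 1) {
                    n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
                    count++;
                }
                return count;
            }

            public static void main(String[] args) {
                int best = 0, bestStart = 0;
                for (int i = 1; i < LIMIT; i++) {
                    int length = chainLength(i);
                    if (length > best) {
                        best = length;
                        bestStart = i;
                    }
                }
                System.out.println("Longest chain: " + best + " terms, starting at " + bestStart);
            }
        }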

  • JSF ISO-8859-2 charset

    - by Vladimir
    Hi! I have a problem with setting the proper charset on my JSF pages. I use a MySQL db with the latin2 (ISO-8859-2) charset and latin2_croatian_ci collation, but I have problems with setting values on backing managed bean properties.

    The page directive at the top of my page is:

        <%@ page language="java" pageEncoding="ISO-8859-2" contentType="text/html; charset=ISO-8859-2" %>

    In head I included:

        <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-2">

    And my form tag is:

        <h:form id="entityDetails" acceptcharset="ISO-8859-2">

    I've created and registered a Filter in web.xml with the following doFilter implementation:

        public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                throws IOException, ServletException {
            request.setCharacterEncoding("ISO-8859-2");
            response.setCharacterEncoding("ISO-8859-2");
            chain.doFilter(request, response);
        }

    But when I set a managed bean property through an inputText, all special (non-ASCII) characters are replaced with the '?' character. I'm out of ideas on how to get the pages to handle this charset correctly. Any suggestions? Thanks in advance.
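    A common cause of this symptom is that the request parameters get decoded before the filter runs (or the filter is mapped after another filter that reads them), so setCharacterEncoding arrives too late. A hedged sketch of an encoding filter that sets the encoding only when none has been negotiated yet and that should be mapped first in web.xml; the class name and init parameter are illustrative, not from the question:

        import java.io.IOException;
        import javax.servlet.Filter;
        import javax.servlet.FilterChain;
        import javax.servlet.FilterConfig;
        import javax.servlet.ServletException;
        import javax.servlet.ServletRequest;
        import javax.servlet.ServletResponse;

        public class CharsetFilter implements Filter {
            private String encoding = "ISO-8859-2";

            public void init(FilterConfig config) throws ServletException {
                String configured = config.getInitParameter("encoding"); // optional override from web.xml
                if (configured != null) {
                    encoding = configured;
                }
            }

            public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
                    throws IOException, ServletException {
                // Must run before anything reads a request parameter; once the container has
                // decoded the POST body with its default charset, setting the encoding is a no-op.
                if (request.getCharacterEncoding() == null) {
                    request.setCharacterEncoding(encoding);
                }
                chain.doFilter(request, response);
            }

            public void destroy() {
            }
        }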

  • Automate the signature of the update.rdf manifest for my firefox extension

    - by streetpc
    Hello, I'm developing a Firefox extension and I'd like to provide automatic updates to my beta-testers (who are not tech-savvy). Unfortunately, the update server doesn't provide HTTPS. According to the Extension Developer Guide on signing updates, I have to sign my update.rdf and provide an encoded public key in the install.rdf.

    There is the McCoy tool to do all of this, but it is an interactive GUI tool and I'd like to automate the extension packaging using an Ant script (as this is part of a much bigger process). I can't find a more precise description of what happens when signing the update.rdf manifest than the passage below, and McCoy's source is an awful lot of javascript. The doc says:

        The add-on author creates a public/private RSA cryptographic key pair.
        The public part of the key is DER encoded and then base 64 encoded and
        added to the add-on's install.rdf as an updateKey entry. (...)
        Roughly speaking the update information is converted to a string, then
        hashed using a sha512 hashing algorithm and this hash is signed using
        the private key. The resultant data is DER encoded then base 64 encoded
        for inclusion in the update.rdf as an signature entry.

    I don't know DER encoding well, but it seems like it needs some parameters. So would anyone know either:

    - the full algorithm to sign the update.rdf and install.rdf using a predefined key pair,
    - a scriptable alternative to McCoy, or whether a command-line tool like asn1coding would suffice, or
    - a good/simple developer tutorial on DER encoding?
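    McCoy itself remains the reference for the exact bytes that get signed, so the following is only a sketch of the cryptographic half that the quoted documentation describes (SHA-512 hash signed with the RSA private key, base64-encoded). The keystore path, alias and passwords are placeholders, the serialization of the update information is deliberately left out, and whether the output matches what Firefox expects would still have to be verified against McCoy's behaviour:

        import java.io.FileInputStream;
        import java.security.KeyStore;
        import java.security.PrivateKey;
        import java.security.Signature;
        import javax.xml.bind.DatatypeConverter;

        public class SignUpdateSketch {
            public static void main(String[] args) throws Exception {
                // Placeholder keystore holding the add-on's RSA key pair.
                KeyStore ks = KeyStore.getInstance("JKS");
                ks.load(new FileInputStream("addon-keys.jks"), "changeit".toCharArray());
                PrivateKey key = (PrivateKey) ks.getKey("update-key", "changeit".toCharArray());

                // updateData must be the exact byte serialization of the update information
                // that McCoy signs; reproducing that serialization is the hard part.
                byte[] updateData = args.length > 0 ? args[0].getBytes("UTF-8") : new byte[0];

                Signature sig = Signature.getInstance("SHA512withRSA"); // SHA-512 digest, RSA signature
                sig.initSign(key);
                sig.update(updateData);
                System.out.println(DatatypeConverter.printBase64Binary(sig.sign()));
            }
        }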

  • Scaling Literate Programming?

    - by Tetha
    Greetings. I have been looking at Literate Programming a bit now, and I do like the idea behind it: you basically write a little paper about your code and write down as much as possible of the design decisions, the code probably surrounding the module, the inner workings of the module, assumptions and conclusions resulting from the design decisions, potential extensions - all of this can be written down in a nice way using TeX. Granted, the first point: it is documentation. It must be kept up-to-date, but that should not be that bad, because your change should have a justification and you can write that down.

    However, how does Literate Programming scale to a larger degree? Overall, Literate Programming is still just text. Very human-readable text, of course, but still text, and thus it is hard to follow large systems. For example, I reworked large parts of my compiler to use some operator magic to chain compile steps together, because some "x.register_follower(y); y.register_follower(z); y.register_follower(a); ..." got really unwieldy, and changing that to a chained x → y → z → a made it a bit better, even though this is at its breaking point, too.

    So, how does Literate Programming scale to larger systems? Does anyone try to do that? My thought would be to use LP to specify components that communicate with each other using event streams and chain all of these together using a subset of graphviz. This would be a fairly natural extension to LP, as you can extract documentation - a dataflow diagram - from the net and also generate code from it really well. What do you think of it?

    -- Tetha.

  • What AOP tools exist for doing aspect-oriented programming at the assembly language level against x86?

    - by JohnnySoftware
    Looking for a tool I can use to do aspect-oriented programming at the assembly language level. For experimentation purposes, I would like the code weaver to operate on native application-level executables and dynamic link libraries. I have already done object-oriented AOP. I know assembly language for x86 and so forth.

    I would like to be able to do logging and other sorts of things using the familiar before/after/around constructs. I would like to be able to specify certain instructions or sequences/patterns of consecutive instructions as what to do a pointcut on, since assembly/machine language is not exactly the most semantically rich computer language on the planet. If debugger and linker symbols are available, naturally, I would like to be able to use them to identify subroutines' entry points, branch/call/jump target addresses, symbolic data addresses, etc.

    I would like the ability to send notifications out to other diagnostic tools. Thus, support for sending data through connection-oriented sockets and datagrams is highly desirable. So is normal logging to files, UI, etc. This can be done using the action part of an aspect to make a function call, but then there are portability issues, so the tool needs to support a flexible, well-abstracted logging/notifying mechanism with a clean, simple yet flexible interface. The goal is rapid QA.

    The idea is to be able to share aspect source code broadly within communities as well as publicly. So, there needs to be a declarative security policy file that users can share. This ensures that nothing untoward that is hidden directly or indirectly in an aspect source file slips by the execution manager. The policy file format needs to be simple to read, write, modify, understand, type in, edit, and generate. Sort of like Java .policy files. Think the exact opposite of anything resembling XML Schema files and you get the idea.

    Is there such a tool in existence already?

  • How to migrate project from RCS to git? (SOLVED)

    - by Norman Ramsey
    I have a 20-year-old project that I would like to migrate from RCS to git, without losing the history. All web pages suggest that the One True Path is through CVS. But after an hour of Googling and trying different scripts, I have yet to find anything that successfully converts my RCS project tree to CVS. I'm hoping the good people at Stackoverflow will know what actually works, as opposed to what is claimed to work and doesn't. (I searched Stackoverflow using both the native SO search and a Google search, but if there's a helpful answer in the database, I missed it.)

    UPDATE: The rcs-fast-export tool at http://git.oblomov.eu/rcs-fast-export was repaired on 14 April 2009, and this version seems to work for me. This tool converts straight to git with no intermediate CVS. Thanks Giuseppe and Jakub!!!

    Things that did not work that I still remember:

    - The rcs-to-cvs script that ships in the contrib directory of the CVS sources
    - The rcs-fast-export tool at http://git.oblomov.eu/rcs-fast-export in versions before 13 April 2010
    - The rcs2cvs script found in a document called "CVS-RCS- HOW-TO Document for Linux"

  • Comparing developer productivity tools

    - by Jacob
    I am getting ready to test developer productivity tools for our team (CodeRush, ReSharper, JustCode, etc.). We're planning to roll the tool out at the same time as we deploy Visual Studio 2010 and TFS. I've seen several posts discussing the merits of one tool versus another. However, I haven't been able to find any discussion of a methodology for doing the evaluation itself.

    One approach I've seen is to use each tool for a month or so and decide which one you like best. This seems completely reasonable if I were evaluating it for myself, but since I'm evaluating for others as well, I'd like to do something more rigorous. I'm hoping to collect some ideas on how to do that.

    As a starting point, my thinking was to come up with a list of 10-20 crucial features to compare, then prepare a matrix where I can compare the relative strengths of each product. Once the matrix is complete, we can make a well-informed decision about which product aligns with our needs. Since I'm not a user of any product now, it's been difficult to figure out which features are critical and which are less so. Thank you!

  • Updating permissions on Amazon S3 files that were uploaded via JungleDisk

    - by Simon_Weaver
    I am starting to use Jungle Disk to upload files to an Amazon S3 bucket which corresponds to a CloudFront distribution, i.e. I can access it via an http:// URL and I am using Amazon as a CDN. The problem I am facing is that Jungle Disk doesn't set 'read' permissions on the files, so when I go to the corresponding URL in a browser I get an Amazon 'AccessDenied' error. If I use a tool like BucketExplorer to set the ACL then that URL now returns a 200.

    I really, really like the simplicity of dragging files to a network drive. JungleDisk is the best program I've found to do this reliably without tripping over itself and getting confused. However, it doesn't seem to have an option to make the files readable. I really don't want to have to go to a different tool (especially if I have to buy it) just to change the permissions - and this seems really slow anyway because such tools generally seem to traverse the whole directory structure. JungleDisk provides some kind of 'web access', but this is a paid feature and I'm not sure if it will work or not.

    S3 doesn't appear to propagate permissions down, which is a real pain. I'm considering writing a manual tool to traverse my tree and set everything to 'read', but I'd rather not do this if this is a problem someone else has already solved.
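    The "manual tool" idea is straightforward with an S3 SDK: list every key in the bucket and apply the public-read canned ACL. A sketch using the AWS SDK for Java; the bucket name and inline credentials are placeholders, and newly uploaded files would still need the ACL applied again (or a bucket policy instead):

        import com.amazonaws.auth.BasicAWSCredentials;
        import com.amazonaws.services.s3.AmazonS3;
        import com.amazonaws.services.s3.AmazonS3Client;
        import com.amazonaws.services.s3.model.CannedAccessControlList;
        import com.amazonaws.services.s3.model.ObjectListing;
        import com.amazonaws.services.s3.model.S3ObjectSummary;

        public class MakeBucketPublicRead {
            public static void main(String[] args) {
                String bucket = "my-cloudfront-bucket"; // placeholder bucket name
                AmazonS3 s3 = new AmazonS3Client(new BasicAWSCredentials("ACCESS_KEY", "SECRET_KEY"));

                ObjectListing listing = s3.listObjects(bucket);
                while (true) {
                    for (S3ObjectSummary summary : listing.getObjectSummaries()) {
                        // Grant anonymous read so the CloudFront/HTTP URL stops returning AccessDenied.
                        s3.setObjectAcl(bucket, summary.getKey(), CannedAccessControlList.PublicRead);
                    }
                    if (!listing.isTruncated()) {
                        break;
                    }
                    listing = s3.listNextBatchOfObjects(listing); // keep going past the listing page size
                }
            }
        }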

  • Emacs key binding fallback

    - by rejeep
    Hey, I have a minor mode. If that mode is active and the user hits DEL, I want to do some action, but only if some condition holds. If the condition holds and the action is executed, I want to do nothing more after that. But if the condition fails, I don't want to do anything and let the default DEL action execute.

    I'm not sure how I could solve this, but I guess I could do it in two ways:

    1. I could rebind the DEL key to a function in the minor mode and then check if the condition holds or not. But then how do I know what the default command for DEL is?

    2. I could add a pre-command hook like this, execute the command and then break the chain. But how do I break the chain?

        (add-hook 'pre-command-hook
                  (lambda ()
                    (when (equal last-input-event 'backspace)
                      ;; Do something and then stop (do not execute the
                      ;; command that backspace is bound to)
                      )))

    In what way would you solve it? Thanks!

  • Generating a static website from a set of content data (possibly with webgen, webby or a similar tool)

    - by Darel
    My company (an engineering firm) is looking to redesign their website with some dynamic content. We have a nice portfolio of projects that we'd like to present on our site by category. To elaborate, I'd like to have a "Projects Category" menu, where you can choose a sub-project category (such as churches, schools, etc) which links to a page with images of all projects which have been tagged with that category attribute. Clicking on an image would then take you to a detailed page for that project.

    I have done a good bit of asp and jsp page development, but I've always worked on the front end in an enterprise environment - I've never built a production site from the back end. The advice I've gotten so far is that a full-blown CMS solution would be somewhat overkill, as we won't have a large hit count, and we'll be displaying a few hundred projects at most.

    One big-picture choice I appear to have - whether to dynamically generate the pages (with asp or jsp) or to use a tool to generate a set of static html pages. The tool would build the menus, project summary pages, and individual project pages based on a set of data I could provide (in the form of a database or text file.) I'm leaning towards trying to use a tool like webgen or webby to statically generate the site due to our current web hosting situation.

    Any thoughts on which approach is more appropriate? Is webgen or webby capable of doing what I am trying to do? Or can anyone recommend other web authoring tools better equipped to accomplish this? Thanks for any feedback!

  • What is the usefulness of W3C's "Semantic Data Extractor" in semantically correct XHTML/CSS development?

    - by metal-gear-solid
    What is the usefulness of the W3C's Semantic Data Extractor? http://www.w3.org/2003/12/semantic-extractor.html

        This tool, geared by an XSLT stylesheet, tries to extract some information
        from a HTML semantic rich document. It only uses information available
        through a good usage of the semantics defined in HTML. The aim is to show
        that providing a semantically rich HTML gives much more value to your code:
        using a semantically rich HTML code allows a better use of CSS, makes your
        HTML intelligible to a wider range of user agents (especially search
        engines bots). As an aside, it can give clues to user agents developers on
        some hooks that could be interesting to add in their product.

    After checking validation for CSS and HTML, should I go on to use the Semantic Data Extractor tool? What does it do, and how can it improve our coding? Is anyone using it?

    I checked some sites randomly with it, but with most of the sites it gives an error:

        Using org.apache.xerces.parsers.SAXParser
        Exception net.sf.saxon.trans.XPathException: org.xml.sax.SAXParseException:
        The element type "input" must be terminated by the matching end-tag "</input>".
        org.xml.sax.SAXParseException: The element type "input" must be terminated
        by the matching end-tag "</input>".

    Is it possible to get every site to pass with this tool? On one site I got this error:

        No top-level heading (h1) found, no outline extracted.

    Is it necessary to have at least an h1 on every web page?

  • Sensible unit test possible?

    - by nkr1pt
    Could a sensible unit test be written for this code, which extracts a rar archive by delegating to a capable tool on the host system if one exists? I can write a test case based on the fact that my machine runs Linux and the unrar tool is installed, but if another developer who runs Windows checked out the code, the test would fail, although there would be nothing wrong with the extractor code. I need to find a way to write a meaningful test which is not bound to the system and the installed unrar tool. How would you tackle this?

        public class Extractor {
            private EventBus eventBus;
            private ExtractCommand[] linuxExtractCommands = new ExtractCommand[]{new LinuxUnrarCommand()};
            private ExtractCommand[] windowsExtractCommands = new ExtractCommand[]{};
            private ExtractCommand[] macExtractCommands = new ExtractCommand[]{};

            @Inject
            public Extractor(EventBus eventBus) {
                this.eventBus = eventBus;
            }

            public boolean extract(DownloadCandidate downloadCandidate) {
                for (ExtractCommand command : getSystemSpecificExtractCommands()) {
                    if (command.extract(downloadCandidate)) {
                        eventBus.fireEvent(this, new ExtractCompletedEvent());
                        return true;
                    }
                }
                eventBus.fireEvent(this, new ExtractFailedEvent());
                return false;
            }

            private ExtractCommand[] getSystemSpecificExtractCommands() {
                String os = System.getProperty("os.name");
                if (Pattern.compile("linux", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
                    return linuxExtractCommands;
                } else if (Pattern.compile("windows", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
                    return windowsExtractCommands;
                } else if (Pattern.compile("mac os x", Pattern.CASE_INSENSITIVE).matcher(os).find()) {
                    return macExtractCommands;
                }
                return null;
            }
        }
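    One way to make this testable regardless of which OS the build machine runs is to treat the OS name and the command arrays as injected collaborators, so a test can hand in fakes instead of relying on a real unrar binary. A hedged sketch of that seam with a JUnit 4 test; ExtractCommand here is a minimal stand-in defined only for illustration, and the real class would keep its EventBus and DownloadCandidate types:

        import static org.junit.Assert.assertTrue;

        import org.junit.Test;

        public class ExtractorSeamTest {

            // Minimal stand-in for the questioner's command type, just enough to show the seam.
            interface ExtractCommand {
                boolean extract(String candidate);
            }

            // The OS name is passed in instead of read from System.getProperty("os.name"),
            // so tests control which branch is taken.
            static class Extractor {
                private final String osName;
                private final ExtractCommand[] linuxCommands;

                Extractor(String osName, ExtractCommand[] linuxCommands) {
                    this.osName = osName;
                    this.linuxCommands = linuxCommands;
                }

                boolean extract(String candidate) {
                    for (ExtractCommand command : commandsFor(osName)) {
                        if (command.extract(candidate)) {
                            return true;
                        }
                    }
                    return false;
                }

                private ExtractCommand[] commandsFor(String os) {
                    if (os.toLowerCase().contains("linux")) {
                        return linuxCommands;
                    }
                    return new ExtractCommand[0];
                }
            }

            @Test
            public void delegatesToLinuxCommandsWhenOsIsLinux() {
                // Fake command that pretends extraction succeeded; no unrar binary needed.
                ExtractCommand fakeUnrar = new ExtractCommand() {
                    public boolean extract(String candidate) {
                        return true;
                    }
                };
                Extractor extractor = new Extractor("Linux", new ExtractCommand[]{fakeUnrar});
                assertTrue(extractor.extract("some-archive.rar"));
            }
        }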

  • C# TcpClient, getting back the entire response from a telnet device

    - by Dan Bailiff
    I'm writing a configuration tool for a device that can communicate via telnet. The tool sends a command via TcpClient.GetStream().Write(...), and then checks for the device response via TcpClient.GetStream().ReadByte(). This works fine in unit tests or when I'm stepping through code slowly. If I run the config tool such that it performs multiple commands consecutively, then the behavior of the read is inconsistent. By inconsistent I mean sometimes the data is missing, incomplete or partially garbled. So even though the device performed the operation and responded, my code to check for the response was unreliable.

    I "fixed" this problem by adding a Thread.Sleep to make sure the read method waits long enough for the device to complete its job and send back the whole response (in this case 1.5 seconds was a safe amount). I feel like this is wrong, blocking everything for a fixed time, and I wonder if there is a better way to get the read method to wait only long enough to get the whole response from the device.

        private string Read()
        {
            if (!tcpClient.Connected)
                throw (new Exception("Read() failed: Telnet connection not available."));

            StringBuilder sb = new StringBuilder();
            do
            {
                ParseTelnet(ref sb);
                System.Threading.Thread.Sleep(1500);
            } while (tcpClient.Available > 0);
            return sb.ToString();
        }

        private void ParseTelnet(ref StringBuilder sb)
        {
            while (tcpClient.Available > 0)
            {
                int input = tcpClient.GetStream().ReadByte();
                switch (input)
                {
                    // parse the input
                    // ... do other things in special cases
                    default:
                        sb.Append((char)input);
                        break;
                }
            }
        }
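    The usual alternative to a fixed sleep is to keep reading until the device's response terminator (a prompt or line ending) shows up, bounded by a read timeout so a dead device cannot hang the tool; in .NET that bound would be NetworkStream.ReadTimeout or TcpClient.ReceiveTimeout. A sketch of the pattern, written in Java only to keep this page's added examples in one language; the host, port, command and terminator are placeholders:

        import java.io.IOException;
        import java.io.InputStream;
        import java.net.Socket;
        import java.net.SocketTimeoutException;

        public class TerminatedRead {
            // Reads until the terminator string is seen or the socket stays quiet past the timeout.
            static String readResponse(Socket socket, String terminator, int timeoutMillis) throws IOException {
                socket.setSoTimeout(timeoutMillis);          // bound every blocking read
                InputStream in = socket.getInputStream();
                StringBuilder sb = new StringBuilder();
                try {
                    int b;
                    while ((b = in.read()) != -1) {
                        sb.append((char) b);
                        if (sb.indexOf(terminator) >= 0) {   // device finished its answer
                            break;
                        }
                    }
                } catch (SocketTimeoutException quiet) {
                    // No terminator and no further bytes within the timeout; return what arrived.
                }
                return sb.toString();
            }

            public static void main(String[] args) throws IOException {
                try (Socket socket = new Socket("192.0.2.10", 23)) {                    // placeholder device address
                    socket.getOutputStream().write("STATUS\r\n".getBytes("US-ASCII")); // placeholder command
                    System.out.println(readResponse(socket, "\r\n> ", 2000));          // placeholder prompt
                }
            }
        }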

  • Git as mercurial client? Why no git-hg?

    - by aapeli
    This is a question that's been bothering me for a while. I've done my homework and checked Stackoverflow and found at least these two topics about my question: "Git for Mercurial like git-svn" and "Git interoperability with a Mercurial repository". I've done some serious googling to solve this issue, but so far with no luck. I've also read the Git Internals book, and the Mercurial Definitive Guide's "Behind the Scenes" chapter, to try to figure this out. I'm still a bit puzzled why I haven't been able to find any suitable git-hg type of tool.

    From my perspective, git-svn is one of the main reasons why I've chosen to use git over Mercurial at work as well. It allows me to use a workflow I like, and nobody else needs to bother if they don't care. I just don't see the point in using an intermediate hg repo to convert back and forth, as suggested in one of those threads.

    So anyway, from what I've read, hg and git seem very similar in conceptual design. There are differences under the hood, but none of those should prevent creating a git client for hg. As it seems to me, remote tracking branches and octopus merges make git even more powerful than hg is.

    So, the real question: is there any real reason why git-hg does not exist (or at least is very hard to find)? Is there some animosity from git users (and developers) towards their hg counterparts that has caused the lack of a git-hg tool? Do any of you have plans to develop something like this and go public with it? I could volunteer (although with very feeble C skills) to participate to get this done. I just don't possess the full knowledge to start this up myself. Could this be the tool to end all DVCS wars for good?

  • Compiling Objective-C project on Linux (Ubuntu)

    - by Alex
    How do I make an Objective-C project work on Ubuntu? My files are:

    Fraction.h

        #import <Foundation/NSObject.h>

        @interface Fraction: NSObject {
            int numerator;
            int denominator;
        }

        -(void) print;
        -(void) setNumerator: (int) n;
        -(void) setDenominator: (int) d;
        -(int) numerator;
        -(int) denominator;
        @end

    Fraction.m

        #import "Fraction.h"
        #import <stdio.h>

        @implementation Fraction
        -(void) print { printf( "%i/%i", numerator, denominator ); }
        -(void) setNumerator: (int) n { numerator = n; }
        -(void) setDenominator: (int) d { denominator = d; }
        -(int) denominator { return denominator; }
        -(int) numerator { return numerator; }
        @end

    main.m

        #import <stdio.h>
        #import "Fraction.h"

        int main( int argc, const char *argv[] ) {
            // create a new instance
            Fraction *frac = [[Fraction alloc] init];

            // set the values
            [frac setNumerator: 1];
            [frac setDenominator: 3];

            // print it
            printf( "The fraction is: " );
            [frac print];
            printf( "\n" );

            // free memory
            [frac release];

            return 0;
        }

    I've tried two approaches to compile it.

    1. Pure gcc:

        $ sudo apt-get install gobjc gnustep gnustep-devel
        $ gcc `gnustep-config --objc-flags` -o main main.m -lobjc -lgnustep-base
        /tmp/ccIQKhfH.o:(.data.rel+0x0): undefined reference to `__objc_class_name_Fraction'

    2. I created a GNUmakefile:

        include ${GNUSTEP_MAKEFILES}/common.make
        TOOL_NAME = main
        main_OBJC_FILES = main.m
        include ${GNUSTEP_MAKEFILES}/tool.make

    ... and ran:

        $ source /usr/share/GNUstep/Makefiles/GNUstep.sh
        $ make
        Making all for tool main...
        Linking tool main ...
        ./obj/main.o:(.data.rel+0x0): undefined reference to `__objc_class_name_Fraction'

    So in both cases the compiler gets stuck at undefined reference to `__objc_class_name_Fraction'. Do you have any idea how to resolve this issue?

  • How can I get the user object from a service in Symfony2

    - by pogo
    My site is using a third-party service for authentication as well as other bits of functionality, so I have set up a service that does all the API calls for me, and it's this service that needs to be able to access the user object. I've tried injecting the security.context service into my API service, but I get a ServiceCircularReferenceException because my user authentication provider references the API service (it has to, in order to authenticate the user). So I get a chain of:

        security.context -> authentication provider -> user provider -> API service -> security.context

    I'm struggling to think of another way of getting the user object, and I can't see any obvious way of splitting up this chain. My configs are all defined in config.yml; here are the relevant bits:

        myapp.database:
            class: Pogo\MyAppBundle\Service\DatabaseService
            arguments:
                siteid: %siteid%
                entityManager: "@doctrine.orm.entity_manager"

        myapp.apiservice:
            class: Pogo\MyAppBundle\Service\TicketingService
            arguments:
                entityManager: "@myapp.database"

        myapp.user_provider:
            class: Pogo\MyAppBundle\Service\APIUserProvider
            arguments:
                entityManager: "@myapp.database"
                ticketingAdapter: "@myapp.apiservice"
                securityContext: "@security.context"

        myapp.authenticationprovider:
            class: Pogo\MyAppBundle\Service\APIAuthenticationProvider
            arguments:
                userChecker: "@security.user_checker"
                encoderFactory: "@security.encoder_factory"
                userProvider: "@myapp.user_provider"

    myapp.user_provider is the service that I've defined as my user provider service in security.yml, which I presume is how it's being referenced by security.context.

  • Easily (as in WYSIWYG) customize the DocBook output

    - by Sukima
    I've used DocBook in the past and I love the idea behind it: the separation of content from presentation. I am very comfortable editing XML directly. In my extensive search to find the best documenting solution for my needs, I keep coming back to this one pipeline: DocBook - build system (ant, make, etc.) - output.

    I have seen lots of information concerning the best WYSIWYG, XML, or text editors for writing DocBook, including alternative markup languages like asciidoc. All these solutions focus on the creation of DocBook or the nightmare of the DocBook tool chain. No one ever addresses the output side, other than to say "Just use XSL" or "Custom scripts".

    When tasked to make a document or manual, I don't want to worry about spending countless hours attempting to reprogram, customize, and modify the XSL, CSS, and shell scripts (i.e. O'Reilly books). That is a very arduous task. My query: is there a tool that makes the customizing easier? And is there anything that could be similar to, say, Pages or Word, in that the user creates a template and the tool chain does the rest? Attempting to do a visual task like pretty logos and fixing all the broken layouts that the default XSL comes up with (pagination is a mess) is very difficult from a text editor. Content is easy. Editing DocBook XSL was truly a nightmare when I did it in the past. I've searched and I find lots of info on XML editors but nothing on XSL editors. Or am I lacking a key understanding of the process? Thanks.

  • CEIL is one too high for exact integer divisions

    - by Synetech
    This morning I lost a bunch of files, but because the volume they were on was both internally and externally defragmented, all of the information necessary for a 100% recovery is available; I just need to fill in the FAT where required. I wrote a program to do this and tested it on a copy of the FAT that I dumped to a file, and it works perfectly except that for a few of the files (17 out of 526), the FAT chain is one single cluster too long, and thus cross-linked with the next file.

    Fortunately I know exactly what the problem is. I used ceil in my EOF calculation because even a single byte over will require a whole extra cluster:

        // Cluster is the starting cluster of the file
        // Size is the size (in bytes) of the file
        // BPC is the number of bytes per cluster
        // NumClust is the number of clusters in the file
        // EOF is the last cluster of the file's FAT chain
        DWORD NumClust = ceil( (float)(Size / BPC) );
        DWORD EOF = Cluster + NumClust;

    This algorithm works fine for everything except files whose size happens to be exactly a multiple of the cluster size, in which case they end up being one cluster too long. I thought about it for a while but am at a loss as to a way to do this. It seems like it should be simple but somehow it is surprisingly tricky. What formula would work for files of any size?
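    Two things conspire here: Size / BPC is already integer division, so the value is truncated before ceil ever sees it, and the last cluster of a chain that starts at Cluster and spans NumClust clusters is Cluster + NumClust - 1, not Cluster + NumClust. Pure integer arithmetic avoids both the float detour and the off-by-one; a sketch of the formula, in Java only to keep this page's added examples in one language (in the question's C it would be (Size + BPC - 1) / BPC):

        public class FatChain {
            // Number of clusters needed for a file of 'size' bytes with 'bpc' bytes per cluster,
            // i.e. integer ceiling division without going through floating point.
            static long clustersFor(long size, long bpc) {
                return (size + bpc - 1) / bpc;
            }

            // Last cluster of a contiguous chain that starts at 'cluster'.
            static long lastCluster(long cluster, long size, long bpc) {
                return cluster + clustersFor(size, bpc) - 1;
            }

            public static void main(String[] args) {
                long bpc = 4096;
                System.out.println(clustersFor(4096, bpc));      // exactly one cluster -> 1
                System.out.println(clustersFor(4097, bpc));      // one byte over -> 2
                System.out.println(lastCluster(100, 8192, bpc)); // starts at 100, spans 2 -> 101
            }
        }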

  • Tools to document/visualize call graph?

    - by Dave Griffiths
    Hi, having recently joined a project with a vast amount of code to get to grips with, I would like to start documenting and visualizing some of the flows through the call graph to give me a better understanding of how everything fits together. This is what I would like to see in my ideal tool:

    - every node is a function/method
    - nodes are connected if one function can call another
    - optional square box in between detailing conditions under which call is made (or a label icon you can hover over like a tooltip)
    - also icon on edge describing parameters
    - hover over node and description is displayed
    - optional icons for node to display pseudo code
    - scenario/domain view - display subset of complete diagram for particular use-case
    - slide view mode - for each frame, the currently executing function is highlighted
    - plenty of options over what to display to reduce on-screen clutter

    The interactive use of such a tool is key; I'm not looking for a Graphviz type solution because there would be too much clutter. The ability to form a view of a subset of the entire graph would be very handy (maybe with the unimportant clutter greyed out). Don't need automatic generation from source code, happy to enter it manually. Almost like a mind-map. Does that make sense? If you are not aware of such a tool, do you also think it would be useful? (Just in case I decide to go and scratch that itch one day!)

  • SoapClient throws Wrong version

    - by sivansethu
    When I send the request below, I get a 'Wrong Version' exception.

        <OTA_HotelGetMsgRQ xmlns="http://www.opentravel.org/OTA/2003/05"
                           TimeStamp="2001-12-17T09:30:47.0Z" Version="4"
                           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
            <Messages>
                <Message HotelCode="123" HotelName="Test Hotel" ChainCode="321"
                         ReasonForRequest="Reservation Retrieval" RequestCode="Optional"
                         ChainName="Test Chain" MessageType="All"
                         StartSeqNmbr="1" EndSeqNmbr="10" />
            </Messages>
        </OTA_HotelGetMsgRQ>

    The request above is converted into Zend code:

        $client = new zend_soap_client(null, array(
            'location' => 'http://url...',
            'Uri' => "http://www.opentravel.org/OTA/2003/05"
        ));

        $request = array(
            array('Messages' => array(
                'Message' => array(
                    'HotelCode' => '123',
                    'HotelName' => 'Test Hotel',
                    'ChainCode' => '321',
                    'ReasonForRequest' => 'Reservation Retrieval',
                    'RequestCode' => 'Optional',
                    'ChainName' => 'Test Chain',
                    'MessageType' => 'All',
                    'StartSeqNmbr' => '1',
                    'EndSeqNmbr' => '10'
                )
            ))
        );

        $result = $client->OTA_HotelGetMsgRQ($request);

    The last line throws the 'Wrong Version' exception. Can anyone help me figure out how to solve this?

  • Ruby: how does constant-lookup work in instance_eval/class_eval?

    - by Alan O'Donnell
    I'm working my way through Pickaxe 1.9, and I'm a bit confused by constant-lookup in instance/class_eval blocks. I'm using 1.9.2. It seems that Ruby handles constant-lookup in *_eval blocks the same way it does method-lookup:

    - look for a definition in receiver.singleton_class (plus mixins);
    - then in receiver.singleton_class.superclass (plus mixins);
    - then continue up the eigenchain until you get to #<Class:BasicObject>, whose superclass is Class;
    - and then up the rest of the ancestor chain (including Object, which stores all the constants you define at the top-level), checking for mixins along the way.

    Is this correct? The Pickaxe discussion is a bit terse. Some examples:

        class Foo
          CONST = 'Foo::CONST'
          class << self
            CONST = 'EigenFoo::CONST'
          end
        end

        Foo.instance_eval { CONST }     # => 'EigenFoo::CONST'
        Foo.class_eval { CONST }        # => 'EigenFoo::CONST', not 'Foo::CONST'!
        Foo.new.instance_eval { CONST } # => 'Foo::CONST'

    In the class_eval example, Foo-the-class isn't a stop along Foo-the-object's ancestor chain!

    And an example with mixins:

        module M
          CONST = "M::CONST"
        end

        module N
          CONST = "N::CONST"
        end

        class A
          include M
          extend N
        end

        A.instance_eval { CONST }     # => "N::CONST", because N is mixed into A's eigenclass
        A.class_eval { CONST }        # => "N::CONST", ditto
        A.new.instance_eval { CONST } # => "M::CONST", because A.new.class, A, mixes in M

  • What are the differences between mapping, binding and parsing?

    - by sfrj
    I am starting to learn web services in Java EE 6. I did web development before, but never anything related to web services. Everything is new to me, and the books and tutorials I find on the web are too technical. I started learning about .xsd schemas and also .xml. In that topic I feel confident: I understand what schemas are used for and what validation means.

    Now my next step is learning about JAXB (Java API for XML Binding). I read some about it and also did some practice in my IDE. But I have lots of basic doubts that make me stuck, so I can't feel confident enough to continue to the next topic. I'd appreciate it a lot if someone could explain these doubts well:

    - What does mapping mean, and what is a mapping tool?
    - What does binding mean, and what is a binding tool?
    - What does parsing mean, and what is a parsing tool?
    - How is JAXB related to mapping, binding and parsing?

    I am seeking a good answer built by you, not just a copy-paste from Google (I've already been online a few hours and only got confused).
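    A small JAXB sketch can make the three terms concrete: the annotations define the mapping between a Java class and an XML shape, the marshal/unmarshal calls are the binding in action, and the XML parsing happens inside JAXB when it reads the document back in. The class and element names below are invented for illustration:

        import java.io.StringReader;
        import java.io.StringWriter;
        import javax.xml.bind.JAXBContext;
        import javax.xml.bind.annotation.XmlElement;
        import javax.xml.bind.annotation.XmlRootElement;

        public class JaxbDemo {

            // Mapping: the annotations declare how this class corresponds to an XML element.
            @XmlRootElement(name = "book")
            public static class Book {
                @XmlElement
                public String title;
                @XmlElement
                public int pages;
            }

            public static void main(String[] args) throws Exception {
                JAXBContext context = JAXBContext.newInstance(Book.class);

                // Binding, direction 1: Java object -> XML (marshalling).
                Book book = new Book();
                book.title = "Java EE 6 Notes";
                book.pages = 120;
                StringWriter xml = new StringWriter();
                context.createMarshaller().marshal(book, xml);
                System.out.println(xml);

                // Binding, direction 2: XML -> Java object; JAXB does the XML parsing internally.
                Book back = (Book) context.createUnmarshaller()
                                          .unmarshal(new StringReader(xml.toString()));
                System.out.println(back.title + ", " + back.pages + " pages");
            }
        }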
