Search Results

Search found 19625 results on 785 pages for 'local groups'.


  • Installing MonoDevelop from SVN on Ubuntu 10.04

    - by celil
    I wrote the following script to install the svn version of MonoDevelop #!/usr/bin/env bash PREFIX=/opt/local check_errs() { if [[ $? -ne 0 ]]; then echo "${1}" exit 1 fi } download() { if [ ! -d ${1} ] then svn co http://anonsvn.mono-project.com/source/trunk/${1} else (cd ${1}; svn update) fi } download mono download mcs download libgdiplus ( cd mono ./autogen.sh --prefix=$PREFIX make make install check_errs ) ( cd libgdiplus ./autogen.sh --prefix=$PREFIX make make install check_errs ) download monodevelop export PKG_CONFIG_PATH=${PREFIX}/lib/pkgconfig ( cd monodevelop ./configure --prefix=$PREFIX --select check_errs make check_errs ) Everything works fine until the last make step for the monodevelop pacakge, where the script exits with the error: ./MonoDevelop.WebReferences/MoonlightChannelBaseExtension.cs(320,82): error CS1061: Type `System.ServiceModel.Description.OperationContractGenerationContext' does not contain a definition for `SyncMethod' and no extension method `SyncMethod' of type `System.ServiceModel.Description.OperationContractGenerationContext' could be found (are you missing a using directive or an assembly reference?) ./MonoDevelop.WebReferences/MoonlightChannelBaseExtension.cs(325,49): error CS1061: Type `System.ServiceModel.Description.OperationContractGenerationContext' does not contain a definition for `SyncMethod' and no extension method `SyncMethod' of type `System.ServiceModel.Description.OperationContractGenerationContext' could be found (are you missing a using directive or an assembly reference?) ./MonoDevelop.WebReferences/MoonlightChannelBaseExtension.cs(345,115): error CS1061: Type `System.ServiceModel.Description.OperationContractGenerationContext' does not contain a definition for `SyncMethod' and no extension method `SyncMethod' of type `System.ServiceModel.Description.OperationContractGenerationContext' could be found (are you missing a using directive or an assembly reference?) ./MonoDevelop.WebReferences/MoonlightChannelBaseExtension.cs(365,82): error CS1061: Type `System.ServiceModel.Description.OperationContractGenerationContext' does not contain a definition for `BeginMethod' and no extension method `BeginMethod' of type `System.ServiceModel.Description.OperationContractGenerationContext' could be found (are you missing a using directive or an assembly reference?) Compilation failed: 4 error(s), 1 warnings make[4]: *** [../../../build/AddIns/MonoDevelop.WebReferences/MonoDevelop.WebReferences.dll] Error 1 make[4]: Leaving directory `/home/drufat/Desktop/Checkout/mono/monodevelop/main/src/addins/MonoDevelop.WebReferences' make[3]: *** [all-recursive] Error 1 make[3]: Leaving directory `/home/drufat/Desktop/Checkout/mono/monodevelop/main/src/addins' make[2]: *** [all-recursive] Error 1 make[2]: Leaving directory `/home/drufat/Desktop/Checkout/mono/monodevelop/main/src' make[1]: *** [all-recursive] Error 1 make[1]: Leaving directory `/home/drufat/Desktop/Checkout/mono/monodevelop/main' make: *** [all-recursive] Error 1 Any ideas on how to fix this? I suppose the build gets mixed up with the default installation of mono in Ubuntu, and is looking for a symbol that is not present there. My build configuration looks as follows: 1. [X] main 2. [ ] extras/JavaBinding 3. [ ] extras/BooBinding 4. [X] extras/ValaBinding 5. [ ] extras/AspNetEdit 6. [ ] extras/GeckoWebBrowser 7. [ ] extras/WebKitWebBrowser 8. [ ] extras/MonoDevelop.Database 9. [ ] extras/MonoDevelop.Profiling 10. [ ] extras/MonoDevelop.AddinAuthoring 11. [ ] extras/MonoDevelop.CodeAnalysis 12. 
[ ] extras/MonoDevelop.Debugger.Mdb 13. [ ] extras/MonoDevelop.Debugger.Gdb 14. [ ] extras/PyBinding 15. [ ] extras/MonoDevelop.IPhone 16. [ ] extras/MonoDevelop.MeeGo
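
    One plausible cause, as suspected above, is that the monodevelop build is picking up Ubuntu's packaged Mono (whose System.ServiceModel predates SyncMethod/BeginMethod) instead of the freshly built one in /opt/local. A rough sketch of the environment to export before the monodevelop configure/make step; the /opt/local paths come from the script above, the rest is an assumption about this particular setup:

        # Prefer the freshly built Mono over Ubuntu's system Mono
        export PATH=/opt/local/bin:$PATH
        export LD_LIBRARY_PATH=/opt/local/lib:$LD_LIBRARY_PATH
        export PKG_CONFIG_PATH=/opt/local/lib/pkgconfig:$PKG_CONFIG_PATH
        # Sanity checks: both should resolve to /opt/local and report the svn version
        which mono gmcs
        mono --version
        pkg-config --modversion mono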


  • Tapestry 4 session expired

    - by cometta
    is below caused by user session expired? if yes, how to exend session on tapestry 4 ? or any other way to solve this problem? Unable to process client request: Unable to forward to local resource '/app?service=page&page=Home&id=692': java.lang.NullPointerException: Property 'webRequest' of <OuterProxy for tapestry.globals.RequestGlobals(org.apache.tapestry.services.RequestGlobals)> is null. Apr 22, 2010 5:14:43 PM org.apache.catalina.core.ApplicationContext log SEVERE: app: ServletException javax.servlet.ServletException: java.lang.NullPointerException: Property 'webRequest' of <OuterProxy for tapestry.globals.RequestGlobals(org.apache.tapestry.services.RequestGlobals)> is null. at org.apache.tapestry.services.impl.WebRequestServicerPipelineBridge.service(WebRequestServicerPipelineBridge.java:65) at $ServletRequestServicer_128043b52ea.service($ServletRequestServicer_128043b52ea.java) at org.apache.tapestry.request.DecodedRequestInjector.service(DecodedRequestInjector.java:55) at $ServletRequestServicerFilter_128043b52e6.service($ServletRequestServicerFilter_128043b52e6.java) at $ServletRequestServicer_128043b52ec.service($ServletRequestServicer_128043b52ec.java) at org.apache.tapestry.multipart.MultipartDecoderFilter.service(MultipartDecoderFilter.java:52) at $ServletRequestServicerFilter_128043b52e4.service($ServletRequestServicerFilter_128043b52e4.java) at $ServletRequestServicer_128043b52ec.service($ServletRequestServicer_128043b52ec.java) at org.apache.tapestry.services.impl.SetupRequestEncoding.service(SetupRequestEncoding.java:53) at $ServletRequestServicerFilter_128043b52e8.service($ServletRequestServicerFilter_128043b52e8.java) at $ServletRequestServicer_128043b52ec.service($ServletRequestServicer_128043b52ec.java) at $ServletRequestServicer_128043b52de.service($ServletRequestServicer_128043b52de.java) at org.apache.tapestry.ApplicationServlet.doService(ApplicationServlet.java:126) at org.apache.tapestry.ApplicationServlet.doPost(ApplicationServlet.java:171) at javax.servlet.http.HttpServlet.service(HttpServlet.java:637) at javax.servlet.http.HttpServlet.service(HttpServlet.java:717) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:378) at org.springframework.security.intercept.web.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:109) at org.springframework.security.intercept.web.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:83) at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390) at org.springframework.security.ui.SessionFixationProtectionFilter.doFilterHttp(SessionFixationProtectionFilter.java:67) at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53) at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390) at org.springframework.security.ui.ntlm.NtlmProcessingFilter.doFilterHttp(NtlmProcessingFilter.java:358) at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53) at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390) at org.springframework.security.ui.ExceptionTranslationFilter.doFilterHttp(ExceptionTranslationFilter.java:101) at 
org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53) at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390) at org.springframework.security.context.HttpSessionContextIntegrationFilter.doFilterHttp(HttpSessionContextIntegrationFilter.java:235) at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53) at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390) at org.springframework.security.concurrent.ConcurrentSessionFilter.doFilterHttp(ConcurrentSessionFilter.java:99) at org.springframework.security.ui.SpringSecurityFilter.doFilter(SpringSecurityFilter.java:53) at org.springframework.security.util.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:390) at org.springframework.security.util.FilterChainProxy.doFilter(FilterChainProxy.java:175) at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:236) at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:167) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286) at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:845) at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583) at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:447) at java.lang.Thread.run(Thread.java:619)
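
    If the root cause really is an expired HttpSession, the session length is controlled by the servlet container rather than by Tapestry itself (Tomcat's default is 30 minutes). A minimal web.xml sketch; the 120-minute value is only an example:

        <!-- web.xml -->
        <session-config>
            <session-timeout>120</session-timeout>
        </session-config>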


  • MongoDB performance on Windows

    - by Chris
    I've been researching nosql options available for .NET lately and MongoDB is emerging as a clear winner in terms of availability and support, so tonight I decided to give it a go. I downloaded version 1.2.4 (Windows x64 binary) from the mongodb site and ran it with the following options: C:\mongodb\bin>mkdir data C:\mongodb\bin>mongod -dbpath ./data --cpu --quiet I then loaded up the latest mongodb-csharp driver from http://github.com/samus/mongodb-csharp and immediately ran the benchmark program. Having heard about how "amazingly fast" MongoDB is, I was rather shocked at the poor benchmark performance. Starting Tests encode (small).........................................320000 00:00:00.0156250 encode (medium)........................................80000 00:00:00.0625000 encode (large).........................................1818 00:00:02.7500000 decode (small).........................................320000 00:00:00.0156250 decode (medium)........................................160000 00:00:00.0312500 decode (large).........................................2370 00:00:02.1093750 insert (small, no index)...............................2176 00:00:02.2968750 insert (medium, no index)..............................2269 00:00:02.2031250 insert (large, no index)...............................778 00:00:06.4218750 insert (small, indexed)................................2051 00:00:02.4375000 insert (medium, indexed)...............................2133 00:00:02.3437500 insert (large, indexed)................................835 00:00:05.9843750 batch insert (small, no index).........................53333 00:00:00.0937500 batch insert (medium, no index)........................26666 00:00:00.1875000 batch insert (large, no index).........................1114 00:00:04.4843750 find_one (small, no index).............................350 00:00:14.2812500 find_one (medium, no index)............................204 00:00:24.4687500 find_one (large, no index).............................135 00:00:37.0156250 find_one (small, indexed)..............................352 00:00:14.1718750 find_one (medium, indexed).............................184 00:00:27.0937500 find_one (large, indexed)..............................128 00:00:38.9062500 find (small, no index).................................516 00:00:09.6718750 find (medium, no index)................................316 00:00:15.7812500 find (large, no index).................................216 00:00:23.0468750 find (small, indexed)..................................532 00:00:09.3906250 find (medium, indexed).................................346 00:00:14.4375000 find (large, indexed)..................................212 00:00:23.5468750 find range (small, indexed)............................440 00:00:11.3593750 find range (medium, indexed)...........................294 00:00:16.9531250 find range (large, indexed)............................199 00:00:25.0625000 Press any key to continue... For starters, I can get better non-batch insert performance from SQL Server Express. What really struck me, however, was the slow performance of the find_nnnn queries. Why is retrieving data from MongoDB so slow? What am I missing? Edit: This was all on the local machine, no network latency or anything. MongoDB's CPU usage ran at about 75% the entire time the test was running. Edit 2: Also, I ran a trace on the benchmark program and confirmed that 50% of the CPU time spent was waiting for MongoDB to return data, so it's not a performance issue with the C# driver.
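
    One way to see whether the slow find/find_one results are a server-side problem is to run the same query in the mongo shell and look at its query plan; collection and field names below are made up for illustration:

        // mongo shell
        db.benchmark.find({ x: 1 }).explain()   // check "cursor" and "nscanned"
        db.benchmark.ensureIndex({ x: 1 })      // index the queried field
        db.benchmark.find({ x: 1 }).explain()   // should now report BtreeCursor and a small nscanned

    If explain() already shows an indexed BtreeCursor with few scanned documents, the bottleneck is more likely in the driver/serialization path than in mongod itself.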


  • Problems with Office automation in ASP.NET; I could use alternatives such as OpenOffice, if I knew how

    - by Vinicius Melquiades
    I have a ASP.NET 2.0 web application that should upload a ppt file and then extract its slides to images. For that I have imported office.dll and Microsoft.Office.Interop.PowerPoint.dll assemblies and wrote the following code public static int ExtractImages(string ppt, string targetPath, int width, int height) { var pptApplication = new ApplicationClass(); var pptPresentation = pptApplication.Presentations.Open(ppt, MsoTriState.msoTrue, MsoTriState.msoFalse, MsoTriState.msoFalse); var slides = new List<string>(); for (var i = 1; i <= pptPresentation.Slides.Count; i++) { var target = string.Format(targetPath, i); pptPresentation.Slides[i].Export(target, "jpg", width, height); slides.Add(new FileInfo(target).Name); } pptPresentation.Close(); return slides.Count; } If I run this code in my local machine, in asp.net or a executable, it runs perfectly. But If I try running it in the production server, I get the following error: System.Runtime.InteropServices.COMException (0x80004005): PowerPoint could not open the file. at Microsoft.Office.Interop.PowerPoint.Presentations.Open(String FileName, MsoTriState ReadOnly, MsoTriState Untitled, MsoTriState WithWindow) at PPTImageExtractor.PptConversor.ExtractImages(String caminhoPpt, String caminhoDestino, Int32 largura, Int32 altura, String caminhoThumbs, Int32 larguraThumb, Int32 alturaThumb, Boolean geraXml) at Upload.ProcessRequest(HttpContext context) The process is running with the user NT AUTHORITY\NETWORK SERVICE. IIS is configured to use anonymous authentication. The anonymous user is an administrator, I set it like this to allow the application to run without having to worry about permissions. In my development machine I have office 2010 beta1. I have tested with the executable in a pc with office 2007 as well. And if I run the code from the executable in the server, with office 2003 installed, it runs perfectly. To ensure that there wouldn't be any problems with permissions, everyone in the server has full access to the web site. The website is running in IIS7 and Classic Mode. I also heard that Open-office has an API that should be able to do this, but I couldn't find anything about it. I don't mind using DLLImport to do what I have to do and I can install open-office on the web server. Don't worry about rewriting this method, as long as the parameters are the same, everything will work. I appreciate your help. ps: Sorry for bad English.
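
    Independent of the server-side permission problem, the posted method never quits the PowerPoint instance it launches, so every upload can leave a POWERPNT.EXE process behind. A hedged cleanup sketch using the same Interop API (it will not, by itself, fix the 0x80004005 open error, and Microsoft generally advises against unattended server-side Office automation):

        // Requires using System.Runtime.InteropServices; in addition to the Interop namespaces.
        var pptApplication = new ApplicationClass();
        Presentation pptPresentation = null;
        try
        {
            pptPresentation = pptApplication.Presentations.Open(
                ppt, MsoTriState.msoTrue, MsoTriState.msoFalse, MsoTriState.msoFalse);
            // ... export the slides exactly as in the original loop ...
        }
        finally
        {
            if (pptPresentation != null)
            {
                pptPresentation.Close();
                Marshal.ReleaseComObject(pptPresentation);
            }
            pptApplication.Quit();
            Marshal.ReleaseComObject(pptApplication);
        }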


  • Java RMI (Server: TCP Connection Idle/Client: Unmarshalexception (EOFException))

    - by Perry Dahl Christensen
    I'm trying to implement Sun Tutorials RMI application that calculates Pi. I'm having some serious problems and I cant find the solution eventhough I've been searching the entire web and several javaskilled people. I'm hoping you can put an end to my frustrations. The crazy thing is that I can run the application from the cmd on my desktop computer. Trying the exact same thing with the exact same code in the exact same directories on my laptop produces the following errors. The problem occures when I try to connect the client to the server. I don't believe that the error is due to my policyfile as I can run it on the desktop. It must be elsewhere. Have anyone tried the same and can you give me a hint as to where my problem is, please? POLICYFILE SERVER: grant { permission java.security.AllPermissions; permission java.net.SocketPermission"*", "connect, resolve"; }; POLICYFILE CLIENT: grant { permission java.security.AllPermissions; permission java.net.SocketPermission"*", "connect, resolve"; }; SERVERSIDE ERRORS: Microsoft Windows XP [Version 5.1.2600] (C) Copyright 1985-2001 Microsoft Corp. C:\Documents and Settings\STUDENTcd\ C:start rmiregistry C:java -cp c:\java;c:\java\compute.jar -Djava.rmi.server.codebase=file:/c:/jav a/compute.jar -Djava.rmi.server.hostname=localhost -Djava.security.policy=c:/jav a/servertest.policy engine.ComputeEngine ComputeEngine bound Exception in thread "RMI TCP Connection(idle)" java.security.AccessControlExcept ion: access denied (java.net.SocketPermission 127.0.0.1:1440 accept,resolve) at java.security.AccessControlContext.checkPermission(Unknown Source) at java.security.AccessController.checkPermission(Unknown Source) at java.lang.SecurityManager.checkPermission(Unknown Source) at java.lang.SecurityManager.checkAccept(Unknown Source) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.checkAcceptPermi ssion(Unknown Source) at sun.rmi.transport.tcp.TCPTransport.checkAcceptPermission(Unknown Sour ce) at sun.rmi.transport.Transport$1.run(Unknown Source) at java.security.AccessController.doPrivileged(Native Method) at sun.rmi.transport.Transport.serviceCall(Unknown Source) at sun.rmi.transport.tcp.TCPTransport.handleMessages(Unknown Source) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(Unknown Sou rce) at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(Unknown Sour ce) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source ) at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) at java.lang.Thread.run(Unknown Source) CLIENTSIDE ERRORS: Microsoft Windows XP [Version 5.1.2600] (C) Copyright 1985-2001 Microsoft Corp. C:\Documents and Settings\STUDENTcd\ C:java -cp c:\java;c:\java\compute.jar -Djava.rmi.server.codebase=file:\C:\jav a\files\ -Djava.security.policy=c:/java/clienttest.policy client.ComputePi local host 45 ComputePi exception: java.rmi.UnmarshalException: Error unmarshaling return header; nested exception is: java.io.EOFException at sun.rmi.transport.StreamRemoteCall.executeCall(Unknown Source) at sun.rmi.server.UnicastRef.invoke(Unknown Source) at java.rmi.server.RemoteObjectInvocationHandler.invokeRemoteMethod(Unkn own Source) at java.rmi.server.RemoteObjectInvocationHandler.invoke(Unknown Source) at $Proxy0.executeTask(Unknown Source) at client.ComputePi.main(ComputePi.java:18) Caused by: java.io.EOFException at java.io.DataInputStream.readByte(Unknown Source) ... 6 more C: Thanks in advance Perry
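
    One concrete problem in both policy files: the class is java.security.AllPermission (singular). With the trailing "s" the entry cannot be resolved and grants nothing, which would explain the AccessControlException even though the file looks wide open. Note also that the failing check on the server side is an "accept" permission, which the SocketPermission line does not include. A corrected policy sketch (usable for both client and server):

        grant {
            // Real class name has no trailing "s"; this alone grants everything,
            // so the SocketPermission line becomes unnecessary.
            permission java.security.AllPermission;

            // Narrower alternative, if AllPermission is too broad:
            // permission java.net.SocketPermission "*:1024-65535", "connect,accept,resolve";
        };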


  • Importing a large dataset into a database

    - by peaceful
    I'm a beginning programmer in the relevant areas to this question, so if possible, it'd be helpful to avoid assuming I know a lot already. I'm trying to import the OpenLibrary dataset into a local Postgres database. After it's imported, I plan to use it as a starting seed for a Ruby on Rails application that will include information on books. The OpenLibrary datasets are available here, in a modified JSON format: http://openlibrary.org/dev/docs/jsondump I only need very basic information for my application, much less than what is provided in the dumps. I'm only trying to get out book titles, author names, and relationships between books and authors. Below are two typical entries from their dataset, the first for an author, and the second for a book (they seem to have an entry for each edition of a book). The entries seem to lead off with a primary key, and then with a type, before including the actual JSON database dump. /a/OL2A /type/author {"name": "U. Venkatakrishna Rao", "personal_name": "U. Venkatakrishna Rao", "last_modified": {"type": "/type/datetime", "value": "2008-09-10 08:44:01.978456"}, "key": "/a/OL2A", "birth_date": "1904", "type": {"key": "/type/author"}, "id": 99, "revision": 3} /b/OL345M /type/edition {"publishers": ["Social Science Research Project, Dept. of Geography, University of Dacca"], "pagination": "ii, 54 p.", "title": "Land use in Fayadabad area", "lccn": ["sa 65000491"], "subject_place": ["East Pakistan", "Dacca region."], "number_of_pages": 54, "languages": [{"comment": "initial import", "code": "eng", "name": "English", "key": "/l/eng"}], "lc_classifications": ["S471.P162 E23"], "publish_date": "1963", "publish_country": "pk ", "key": "/b/OL345M", "authors": [{"birth_date": "1911", "name": "Nafis Ahmad", "key": "/a/OL302A", "personal_name": "Nafis Ahmad"}], "publish_places": ["Dacca, East Pakistan"], "by_statement": "[by] Nafis Ahmad and F. Karim Khan.", "oclc_numbers": ["4671066"], "contributions": ["Khan, Fazle Karim, joint author."], "subjects": ["Land use -- East Pakistan -- Dacca region."]} The size of the uncompressed dumps are enormous, about 2GB for the authors list, and 18GB for the book editions list. OpenLibrary does not provide any tools for this themselves, they provide a simple unoptimized Python script for reading in sample data (which unlike the actual dumps comes in pure JSON format), but they estimate if that was modified for use on their actual data it would take 2 months (!) to finish loading the data. How can I read this into the database? I assume I'll need to write a program to do this. What language and any guidance on how I should do it to finish in a reasonable amount of time? The only scripting language I have any experience with is Ruby.
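
    A minimal Ruby sketch of the usual approach at this scale: stream the dump line by line, keep only the needed fields, and write a tab-separated file that Postgres can bulk-load with COPY, which is far faster than row-by-row INSERTs. The "key type json" layout is assumed from the two sample lines above, and all file and table names are made up:

        #!/usr/bin/env ruby
        require 'rubygems'
        require 'json'

        # Convert the editions dump into a TSV suitable for:
        #   \copy editions (key, title, author_keys) FROM 'editions.tsv'
        File.open('editions.tsv', 'w') do |out|
          File.foreach('ol_dump_editions.txt') do |line|
            key, type, json = line.chomp.split(' ', 3)
            next unless type == '/type/edition'
            record = JSON.parse(json)
            title  = record['title']
            next if title.nil?
            authors = (record['authors'] || []).map { |a| a['key'] }.join(',')
            out.puts [key, title.gsub(/[\t\n]/, ' '), authors].join("\t")
          end
        end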


  • Default Database Collations got messed up

    - by dominicdinada
    I am using Ubuntu 9.10 with XAMPP ( Lampp "MYSQL 5.1.45 PHPMYADMIN 3.3.1 PHP 5.3.2 ) What my problem is, is that I set up my testing env to debug my scripts locally and when I did so there arose a problem. This problem is that I used firefox's addon SQLinject ME to test for weakness' and upon doing so it caused mysql to change the default local collations; character sets dir /opt/lampp/share/mysql/charsets/ collation connection latin1_general_ci (Global value) latin1_swedish_ci collation database latin1_swedish_ci collation server latin1_swedish_ci I have searched for quite sometime in regards to a solution to this problem and have come up with searching for the db.opt file which stores this information without success. Upon not finding a solution I removed lampp with the "sudo rm -fR /opt" command and reinstall and the problem still persists. I have tried to change the collations manually and still come up with the database displaying latin1_swedish_ci as the default language. Why is this a problem?? Why is it a problem with mysql? Because the application I am testing and debugging locally is built on the CodeIgnitor with Smarty framework and since this combination of framework is built to detect the LOCALES, Rather what the database defaults are I keep getting errors saying no language file for swedish...... Of course I could get the swedish language file to work around this problem but I do not feel the need to make this work around a perminant solution as with time when I move on to projects I will run into simular problems every time that A; When importing database files, backups etc it will default to import such databases as the locale swedish. B; As time passes on I might completly forget of this error and will be back to square one. I have found this code in searches for a fix,which seems to alter the tables to a desired Collaion; $value) { mysql_query("ALTER TABLE $value COLLATE latin1_general_ci"); }} echo "The collation of your database has been successfully changed!"; ? Which is handy to switch collations in One Schema at a time however this is not a fix when a framework doesnt care that the said database is in one langugae. It tests for the Default of the entire server. Someone with any knowledge of a purge or fix to this I would greatly appricate the help. One more final note is that when I was testing I only figured to back up the applications DataBase and not the entire Schema of the install. No matter if I uninstall or reinstall the database still seems to carry these problems.
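
    Assuming the goal is simply to move the defaults away from latin1_swedish_ci: the per-database default (stored in each schema's db.opt) can be changed with ALTER DATABASE, and the server-wide default comes from the MySQL configuration. A sketch, with the database name and target collation as examples only (for XAMPP/LAMPP the config file is typically /opt/lampp/etc/my.cnf):

        -- per database:
        ALTER DATABASE my_db DEFAULT CHARACTER SET latin1 COLLATE latin1_general_ci;
        -- verify:
        SHOW VARIABLES LIKE 'collation%';

        # server-wide, in the [mysqld] section of my.cnf, then restart MySQL:
        [mysqld]
        character-set-server = latin1
        collation-server     = latin1_general_ci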


  • Error when compiling code with the Protege API

    - by Anto
    I am new to Protege API and I have just created on Eclipse a small application which uses an external OWL file. Also I did import all the necessary libraries. import java.util.Collection; import java.util.Iterator; import edu.stanford.smi.protege.exception.OntologyLoadException; import edu.stanford.smi.protegex.owl.ProtegeOWL; import edu.stanford.smi.protegex.owl.model.*; public class Trial { public static void main(String[] args) throws OntologyLoadException{ String uri = "C:/Documents and Settings/Anto/Desktop/travel.owl"; OWLModel owlModel = ProtegeOWL.createJenaOWLModelFromURI(uri); Collection classes = owlModel.getUserDefinedOWLNamedClasses(); for(Iterator it = classes.iterator(); it.hasNext();){ OWLNamedClass cls = (OWLNamedClass) it.next(); Collection instances = cls.getInstances(false); System.out.println("Class " + cls.getBrowserText()+ " (" + instances.size()+")"); for(Iterator jt = instances.iterator(); jt.hasNext();){ OWLIndividual individual = (OWLIndividual) jt.next(); System.out.println(" - "+ individual.getBrowserText()); } } } } When I do compile however I get the following errors: WARNING: [Local Folder Repository] The specified file must be a directory. (C:\Documents and Settings\Anto\My Documents\Eclipse Workspace\ProtegeTrial\plugins\edu.stanford.smi.protegex.owl) LocalFolderRepository.update() SEVERE: Exception caught -- java.net.URISyntaxException: Illegal character in path at index 12: C:/Documents and Settings/CiuffreA/Desktop/travel.owl at java.net.URI$Parser.fail(URI.java:2809) at java.net.URI$Parser.checkChars(URI.java:2982) at java.net.URI$Parser.parseHierarchical(URI.java:3066) at java.net.URI$Parser.parse(URI.java:3014) at java.net.URI.<init>(URI.java:578) at edu.stanford.smi.protegex.owl.jena.JenaKnowledgeBaseFactory.getFileURI(Unknown Source) at edu.stanford.smi.protegex.owl.jena.JenaKnowledgeBaseFactory.loadKnowledgeBase(Unknown Source) at edu.stanford.smi.protege.model.Project.loadDomainKB(Unknown Source) at edu.stanford.smi.protege.model.Project.createDomainKnowledgeBase(Unknown Source) at edu.stanford.smi.protegex.owl.jena.creator.OwlProjectFromUriCreator.create(Unknown Source) at edu.stanford.smi.protegex.owl.ProtegeOWL.createJenaOWLModelFromURI(Unknown Source) at Trial.main(Trial.java:14) Exception in thread "main" java.lang.NullPointerException at edu.stanford.smi.protegex.owl.jena.JenaKnowledgeBaseFactory.loadKnowledgeBase(Unknown Source) at edu.stanford.smi.protege.model.Project.loadDomainKB(Unknown Source) at edu.stanford.smi.protege.model.Project.createDomainKnowledgeBase(Unknown Source) at edu.stanford.smi.protegex.owl.jena.creator.OwlProjectFromUriCreator.create(Unknown Source) at edu.stanford.smi.protegex.owl.ProtegeOWL.createJenaOWLModelFromURI(Unknown Source) at Trial.main(Trial.java:14) Does anyone have an idea on where the problem should be?
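
    The SEVERE line points at the real problem: "Illegal character in path at index 12" is the space in "C:/Documents and Settings/...". createJenaOWLModelFromURI expects a well-formed URI, so the spaces have to be encoded, or the string built from a File. A sketch:

        // Let java.io.File produce a proper file: URI (spaces become %20)
        String uri = new java.io.File(
                "C:/Documents and Settings/Anto/Desktop/travel.owl").toURI().toString();
        OWLModel owlModel = ProtegeOWL.createJenaOWLModelFromURI(uri);

        // Or encode by hand:
        // String uri = "file:///C:/Documents%20and%20Settings/Anto/Desktop/travel.owl";

    The "Local Folder Repository" warning about the plugins directory is a separate issue and is usually harmless when the Protege-OWL API is used outside the Protege application itself.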


  • Insert a doctype into an XML document (Java/ SAX)

    - by Thom Nichols
    Imagine you have an XML document and imagine you have the DTD but the document itself doesn't actually specify a DOCTYPE ... How would you insert the DOCTYPE declaration, preferably by specifying it on the parser (similar to how you can set the schema for a document that will be parsed) or by inserting the necessary SAX events via an XMLFilter or the like? I've found many references to EntityResolver, but that is what's invoked once a DOCTYPE is found during parsing and it's used to point to a local DTD file. EntityResolver2 appears to have what I'm looking for but I haven't found any examples of usage. This is the closest I've come thus far: (code is Groovy, but close enough that you should be able to understand it...) import org.xml.sax.* import org.xml.sax.ext.* import org.xml.sax.helpers.* class XmlFilter extends XMLFilterImpl { public XmlFilter( XMLReader reader ) { super(reader) } @Override public void startDocument() { super.startDocument() super.resolveEntity( null, 'file:///./entity.dtd') println "filter startDocument" } } class MyHandler extends DefaultHandler2 { public InputSource resolveEntity(String name, String publicId, String baseURI, String systemId) { println "entity: $name, $publicId, $baseURI, $systemId" return new InputSource(new StringReader('<!ENTITY asdf "&#161;">')) } } def handler = new MyHandler() def parser = XMLReaderFactory.createXMLReader() parser.setFeature 'http://xml.org/sax/features/use-entity-resolver2', true def filter = new XmlFilter( parser ) filter.setContentHandler( handler ) filter.setEntityResolver( handler ) filter.parse( new InputSource(new StringReader('''<?xml version="1.0" ?> <test>one &asdf; two! &nbsp; &iexcl;&pound;&cent;</test>''')) ); I see resolveEntity called but still hit org.xml.sax.SAXParseException: The entity "asdf" was referenced, but not declared. at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1231) at org.xml.sax.helpers.XMLFilterImpl.parse(XMLFilterImpl.java:333) I guess this is because there's no way to add SAX events that the parser knows about, I can only add events via a filter that's upstream from the parser which are passed along to the ContentHandler. So the document has to be valid going into the XMLReader. Any way around this? I know I can modify the raw stream to add a doctype or possibly do a transform to set a DTD... Any other options?


  • ActiveX component in Axapta

    - by Nico
    Hi folks, I'm struggling with a .NET ActiveX component that I'm trying to use in MS Axapta 2009. On my local machine, where it was compiled, it works fine: it can be added as an ActiveX element on a form, its methods and events are listed in Axapta's ActiveX explorer, and I can interact with it without any problems. Distributing the DLL to other clients, however, doesn't work as intended. Registering the DLL via regasm /codebase /tlb appears to succeed (the message says registration was successful), and the component is listed when selecting an ActiveX element to add in AX, but neither its functions nor its properties are shown. Launching the form then produces an error message: ActiveX component CLSID ... not found on system, not installed. The class ID is indeed the one defined in .NET. Stranger still, the ActiveX component itself is just a wrapper that talks to a COM application, and when I launch the AX form with the supposedly "not installed" ActiveX, Task Manager shows a new process of that COM application, instantiated by the ActiveX. Things I have tried: using different versions of regasm (e.g. C:\Windows\Microsoft.NET\Framework\v2.0.50727 and C:\Windows\Microsoft.NET\Framework64\v2.0.50727); generating new GUIDs in .NET after removing the old ones from the registry and recompiling; building against different versions of the .NET Framework; registering via regasm, regasm /codebase, regasm /codebase /tlb, and a Visual Studio setup project; running the registration from the command line as administrator; running the setup as administrator; running AX itself as administrator on the client machine; moving the DLL to different folders (windows/system32, ax/client/bin) followed by re-registration; installing into the GAC (gacutil /i); and trying different project options in Visual Studio (COM visibility, "Register for COM interop", different target platforms). Hoping that compiling in Visual Studio with "Register for COM interop" enabled does something beyond a plain regasm registration, I also used a Microsoft registry-monitoring tool to log the registry activity during compilation and replayed those entries on the target client, which didn't help either. Any hints would be much appreciated -- this has been blocking me for days.


  • Reliable way of generating unique hardware ID

    - by mr.b
    Question: what's the best way to accomplish following. I have to come up with unique ID for each networked client, such that: it (ID) should persist once client software is installed on target computer, and should continue to persist if software is re-installed on same computer and same OS installment, it should not change if hardware configuration is modified in most ways (except changing the motherboard) When hard drive with client software installed is cloned to another computer with identical hardware configuration (or, as similar as possible), client software should be aware of that change. A little bit of explanation and some back-story: This question is basically age old question that also touches topic of software copy-protection, as some of mechanisms used in that area are mentioned here. I should be clear at this point that I'm not looking for a copy-protection scheme. Please, read on. :) I'm working on a client-server software that is supposed to work in local network. One of problems I have to solve is to identify each unique client in network (not so much of a problem), so that I can apply certain attributes to every specific client, retain and enforce those attributes during deployment lifetime of a specific client. While I was looking for a solution, I was aware of following: Windows activation system uses some kind of heavy fingerprinting mechanism, that is extremely sensitive to hardware modifications, Disk imaging software copies along all Volume IDs (tied to each partition when formatted), and custom, uniquely generated IDs during installation process, during first run, or in any other way, that is strictly software in its nature, and stored in registry or on hard drive, so it's very easy to confuse two Obvious choice for this kind of problem would be to find out BIOS identifiers (not 100% sure if this is unique through identical motherboard models, though), as that's the only thing I can rely on, that isn't duplicated, transferred by cloning, and that can't be changed (at least not by using some user-space program). Everything else fails as either being not reliable (MAC cloning, anyone?), or too demanding (in terms that it's too sensitive to configuration changes). Am I missing something obvious here? Sub-question that I'd like to ask is, am I doing it correctly, architecture-wise? Perhaps there is a better tool for task that I have to accomplish... Another approach I had in mind is something similar to handshake mechanism, where server maintains internal lookup table of connected client IDs (which can be even completely software-based and non-unique at any given moment), and tells client to come up with different ID during handshake, if duplicate ID is provided upon connection. That approach, unfortunately, doesn't play nicely with one of requirements to tie attributes to specific client during lifetime.
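
    If the BIOS/motherboard route is taken, WMI exposes those identifiers from user space without anything exotic; a C# sketch (note that some OEM boards report empty or duplicated serial numbers, so treat this as one input to the ID, not a guarantee of uniqueness):

        // Add a reference to System.Management
        using System;
        using System.Management;

        static class HardwareId
        {
            public static string BoardIds()
            {
                string bios  = Query("SELECT SerialNumber FROM Win32_BIOS");
                string board = Query("SELECT SerialNumber FROM Win32_BaseBoard");
                return bios + "|" + board;   // combine/hash with other sources as needed
            }

            private static string Query(string wql)
            {
                using (var searcher = new ManagementObjectSearcher(wql))
                    foreach (ManagementObject o in searcher.Get())
                        return (o["SerialNumber"] ?? string.Empty).ToString();
                return string.Empty;
            }
        }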


  • How to run a powershell script within a DOS batch file

    - by Don Vince
    How do I have a powershell script embedded within the same file as a DOS batch script? I know this kind of thing is possible in other scenarios: Embedding SQL in a DOS batch script using sqlcmd and a clever arrangements of goto's and comments at the beginning of the file In a *nix environment having a the name of the program you wish to run the script with on the first line of the script commented out e.g. #!/usr/local/bin/python There may not be a way to do this - in which case I will have to call the separate powershell script from the launching DOS script. One possible solution I've considered is to echo out the powershell script, and then run it. A good reason to not do this is that part of the reason to attempt this is to be using the advantages of the powershell environment without the pain of, for example, DOS escape characters I have some unusual constraints and would like to find an elegant solution. I suspect this question may be baiting responses of the variety: "why don't you try and solve this different problem instead." Suffice to say these are my constraints, sorry about that. Any ideas? Is there a suitable combination of clever comments and escape characters that will enable me to achieve this? Some thoughts on how to achieve this: A carat ^ at the end of a line in DOS is a continuation - like an underscore in VB An ampersand & in DOS typically is used to separate commands echo Hello & echo World results in 2 echos on separate lines %0 will give you the script that's currently running So something like this (if I could make it work) would be good: # & call powershell -psconsolefile %0 # & goto :EOF /* From here on in we're running nice juicy powershell code */ Write-Output "Hello World" Except... It doesn't work... because the extension of the file isn't as per powershell's liking: Windows PowerShell console file "insideout.bat" extension is not psc1. Windows PowerShell console file extension must be psc1. DOS isn't really altogether happy with the situation either - although it does stumble on '#' is not recognized as an internal or external command, operable program or batch file.
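
    One trick that is sometimes used for exactly this (a sketch, not battle-tested here): let the batch half filter itself out of the file with findstr and pipe the remainder into powershell reading commands from standard input, then goto :eof so cmd.exe never parses the PowerShell lines:

        @echo off
        rem Lines starting with the tokens below are treated as batch and stripped out.
        findstr /v /b "@echo rem findstr goto" "%~f0" | powershell -NoProfile -Command -
        goto :eof

        Write-Output "Hello from PowerShell, embedded in a .bat file"
        Get-Date

    The obvious caveat is that no PowerShell line may begin with one of the filtered tokens.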


  • How to write a buffer-overflow exploit in GCC,windows XP,x86?

    - by Mask
    void function(int a, int b, int c) { char buffer1[5]; char buffer2[10]; int *ret; ret = buffer1 + 12; (*ret) += 8;//why is it 8?? } void main() { int x; x = 0; function(1,2,3); x = 1; printf("%d\n",x); } The above demo is from here: http://insecure.org/stf/smashstack.html But it's not working here: D:\test>gcc -Wall -Wextra hw.cpp && a.exe hw.cpp: In function `void function(int, int, int)': hw.cpp:6: warning: unused variable 'buffer2' hw.cpp: At global scope: hw.cpp:4: warning: unused parameter 'a' hw.cpp:4: warning: unused parameter 'b' hw.cpp:4: warning: unused parameter 'c' 1 And I don't understand why it's 8 though the author thinks: A little math tells us the distance is 8 bytes. My gdb dump as called: Dump of assembler code for function main: 0x004012ee <main+0>: push %ebp 0x004012ef <main+1>: mov %esp,%ebp 0x004012f1 <main+3>: sub $0x18,%esp 0x004012f4 <main+6>: and $0xfffffff0,%esp 0x004012f7 <main+9>: mov $0x0,%eax 0x004012fc <main+14>: add $0xf,%eax 0x004012ff <main+17>: add $0xf,%eax 0x00401302 <main+20>: shr $0x4,%eax 0x00401305 <main+23>: shl $0x4,%eax 0x00401308 <main+26>: mov %eax,0xfffffff8(%ebp) 0x0040130b <main+29>: mov 0xfffffff8(%ebp),%eax 0x0040130e <main+32>: call 0x401b00 <_alloca> 0x00401313 <main+37>: call 0x4017b0 <__main> 0x00401318 <main+42>: movl $0x0,0xfffffffc(%ebp) 0x0040131f <main+49>: movl $0x3,0x8(%esp) 0x00401327 <main+57>: movl $0x2,0x4(%esp) 0x0040132f <main+65>: movl $0x1,(%esp) 0x00401336 <main+72>: call 0x4012d0 <function> 0x0040133b <main+77>: movl $0x1,0xfffffffc(%ebp) 0x00401342 <main+84>: mov 0xfffffffc(%ebp),%eax 0x00401345 <main+87>: mov %eax,0x4(%esp) 0x00401349 <main+91>: movl $0x403000,(%esp) 0x00401350 <main+98>: call 0x401b60 <printf> 0x00401355 <main+103>: leave 0x00401356 <main+104>: ret 0x00401357 <main+105>: nop 0x00401358 <main+106>: add %al,(%eax) 0x0040135a <main+108>: add %al,(%eax) 0x0040135c <main+110>: add %al,(%eax) 0x0040135e <main+112>: add %al,(%eax) End of assembler dump. Dump of assembler code for function function: 0x004012d0 <function+0>: push %ebp 0x004012d1 <function+1>: mov %esp,%ebp 0x004012d3 <function+3>: sub $0x38,%esp 0x004012d6 <function+6>: lea 0xffffffe8(%ebp),%eax 0x004012d9 <function+9>: add $0xc,%eax 0x004012dc <function+12>: mov %eax,0xffffffd4(%ebp) 0x004012df <function+15>: mov 0xffffffd4(%ebp),%edx 0x004012e2 <function+18>: mov 0xffffffd4(%ebp),%eax 0x004012e5 <function+21>: movzbl (%eax),%eax 0x004012e8 <function+24>: add $0x5,%al 0x004012ea <function+26>: mov %al,(%edx) 0x004012ec <function+28>: leave 0x004012ed <function+29>: ret In my case the distance should be - = 5,right?But it seems not working.. Why function needs 56 bytes for local variables?( sub $0x38,%esp )
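
    Working from the posted disassembly rather than the article's numbers (so treat the arithmetic as specific to this particular gcc build): buffer1 is addressed as lea 0xffffffe8(%ebp), i.e. ebp-24, while the saved return address sits at ebp+4, so the offset from buffer1 to the return address here is 24+4 = 28 bytes, not 12. Similarly, the instruction being skipped, movl $0x1,0xfffffffc(%ebp) at 0x0040133b, runs up to 0x00401342, so it is 7 bytes long on this build, not 8; the "+8" in the Aleph One article only applies to the compiler he happened to use. And sub $0x38 reserves 56 bytes simply because gcc pads and aligns the locals (the 5- and 10-byte buffers, the int pointer, plus 16-byte stack alignment), not because the declared sizes add up to 56.

        stack frame implied by the dump (higher addresses first):
          ebp+4    return address   <- buffer1 + 28
          ebp+0    saved ebp
          ...      padding / buffer2
          ebp-24   buffer1          (lea 0xffffffe8(%ebp))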


  • jQueryUI autocomplete - when no results are returned

    - by Brian M. Hunt
    I'm wondering how one can catch and add a custom handler when empty results are returned from the server when using jQueryUI autocomplete. There seem to be a few questions on this point related to the various jQuery plugins (e.g. jQuery autocomplete display “No data” error message when results empty), but I am wondering if there's a better/simpler way to achieve the same with the jQueryUI autocomplete. It seems to me this is a common use case, and I thought perhaps that jQueryUI had improved on the jQuery autocomplete by adding the ability to cleanly handle this situation. However I've not been able to find documentation of such functionality, and before I hack away at it I'd like to throw out some feelers in case others have seen this before. While probably not particularly influential, I can have the server return anything - e.g. HTTP 204: No Content to a 200/JSON empty list - whatever makes it easiest to catch the result in jQueryUI's autocomplete. My first thought is to pass a callback with two arguments, namely a request object and a response callback to handle the code, per the documentation: The third variation, the callback, provides the most flexibility, and can be used to connect any data source to Autocomplete. The callback gets two arguments: A request object, with a single property called "term", which refers to the value currently in the text input. For example, when the user entered "new yo" in a city field, the Autocomplete term will equal "new yo". A response callback, which expects a single argument to contain the data to suggest to the user. This data should be filtered based on the provided term, and can be in any of the formats described above for simple local data (String-Array or Object-Array with label/value/both properties). When the response callback receives no data, it inserts returns a special one-line object-array that has a label and an indicator that there's no data (so the select/focus recognize it as the indicator that no-data was returned). This seems overcomplicated. I'd prefer to be able to use a source: "http://...", and just have a callback somewhere indicating that no data was returned. Thank you for reading. Brian
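
    jQuery UI's autocomplete has no built-in "no results" event, so the callback form of source really is the intended hook, and it is less work than it sounds. A sketch with a placeholder URL and field id:

        $("#city").autocomplete({
            source: function (request, response) {
                $.getJSON("/cities/search", { term: request.term }, function (data) {
                    if (!data || data.length === 0) {
                        // single non-selectable "no matches" row
                        response([{ label: "No matches found", value: "" }]);
                    } else {
                        response(data);
                    }
                });
            },
            select: function (event, ui) {
                if (ui.item.value === "") { return false; }   // ignore the placeholder row
            }
        });

    Returning a 200 with an empty JSON array from the server keeps the client logic simple; a 204 would need a separate handler.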


  • How to read an IIS 6 Website's Directory Structure using WMI?

    - by Steve Johnson
    I need to read a website's folders using WMI and C# in IIS 6.0. I am able to read the Virtual directories and applications using the "IISWebVirtualDirSetting" class. However the physical folders located inside a website cannot be read using this class. And for my case i need to read sub folders located within a website and later on set permission on them. For my requirement i dont need to work on Virtual Directories/Web Service Applications (which can be easily obtained using the code below..). I have tried to use IISWebDirectory class but it has been useful. Here is the code that reads IIS Virtual Directories... public static ArrayList RetrieveVirtualDirList(String ServerName, String WebsiteName) { ConnectionOptions options = SetUpAuthorization(); ManagementScope scope = new ManagementScope(string.Format(@"\\{0}\root\MicrosoftIISV2", ServerName), options); scope.Connect(); String SiteId = GetSiteIDFromSiteName(ServerName, WebsiteName); ObjectQuery OQuery = new ObjectQuery(@"SELECT * FROM IISWebVirtualDirSetting"); //ObjectQuery OQuery = new ObjectQuery(@"SELECT * FROM IIsSetting"); ManagementObjectSearcher WebSiteFinder = new ManagementObjectSearcher(scope, OQuery); ArrayList WebSiteListArray = new ArrayList(); ManagementObjectCollection WebSitesCollection = WebSiteFinder.Get(); String WebSiteName = String.Empty; foreach (ManagementObject WebSite in WebSitesCollection) { WebSiteName = WebSite.Properties["Name"].Value.ToString(); WebsiteName = WebSiteName.Replace("W3SVC/", ""); String extrctedSiteId = WebsiteName.Substring(0, WebsiteName.IndexOf('/')); String temp = WebsiteName.Substring(0, WebsiteName.IndexOf('/') + 1); String VirtualDirName = WebsiteName.Substring(temp.Length); WebsiteName = WebsiteName.Replace(SiteId, ""); if (extrctedSiteId.Equals(SiteId)) //if (true) { WebSiteListArray.Add(VirtualDirName ); //WebSiteListArray.Add(WebSiteName); //+ "|" + WebSite.Properties["Path"].Value.ToString() } } return WebSiteListArray; } P.S: I need to programmatically get the sub folders of an already deployed site(s) using WMI and C# in an ASP. Net Application. I need to find out the sub folders of existing websites in a local or remote IIS 6.0 Web Server. So i require a programmatic solution. Precisely if i am pointed at the right class (like IISWebVirtualDirSetting etc ) that i may use for retrieving the list of physical folders within a website then it will be quite helpful. I am not working in Powershell and i don't really need a solution that involves powershell or vbscripts. Any alternative programmatic way of doing the same in C#/ASP.Net will also be highly appreciated.
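
    WMI only models objects IIS knows about (sites, virtual directories, web directories), so plain sub-folders of a site's home directory never show up through IISWebVirtualDirSetting. What usually works is to ask WMI for the site's root path and then walk the filesystem. A sketch reusing the scope and SiteId from the code above (for a remote server the returned path may need translating to a UNC share):

        string rootName = "W3SVC/" + SiteId + "/ROOT";
        ObjectQuery pathQuery = new ObjectQuery(
            "SELECT Path FROM IISWebVirtualDirSetting WHERE Name = '" + rootName + "'");
        string homeDirectory = null;
        using (ManagementObjectSearcher searcher = new ManagementObjectSearcher(scope, pathQuery))
        {
            foreach (ManagementObject vdir in searcher.Get())
                homeDirectory = vdir.Properties["Path"].Value.ToString();
        }

        // Physical sub-folders of the web site's home directory:
        string[] subFolders = System.IO.Directory.GetDirectories(homeDirectory);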


  • WCF Timeout issue - should there even be a socket connection?

    - by stiank81
    I have a .Net application which is split into a client and server side. The communication between them is handled using WCF. I'm not using the automagic service references, but instead I've built the connection manually like described in the Screencast by Miguel Castro. Summarized this means that I create a console application on the server side that holds ServiceHost objects for the different services: var myServiceHost = new System.ServiceModel.ServiceHost(typeof(MyService), new Uri("net.tcp://localhost:8002")); myServiceHost.Open(); And on the client side I have service proxies creating channels using the ChannelFactory: IMyService proxy = new ChannelFactory<IMyService>("MyServiceEndpoint").CreateChannel(); The client and server side share the service contract defined in the interface IMyService. And another advantage is that I get minimal App.config files - without all the autogenerated stuff created through the Service References. Example from client side: <?xml version="1.0"?> <configuration> <system.serviceModel> <client> <endpoint address="net.tcp://localhost:8002/MyEndpoint" binding="netTcpBinding" contract="IMyService" name="MyServiceEndpoint"/> </client> </system.serviceModel> </configuration> So - to my problem. I create the proxy once, and it holds a channel all the way through the application. However, if I leave the application without use for a few minutes the channel has timed out, and I get the following exception: The socket connection was aborted. This could be caused by an error processing your message or a receive timeout being exceeded by the remote host, or an underlying network resource issue. Local socket timeout was '00:00:59.9979998'. How do I prevent this? I'm assuming I need to specify a higher timeout in my configuration? But I don't want it to ever time out. But on the other hand - I don't want a socket connection! Do I need one? Thought I could go connection less with WCF... What's the permanent solution and best practice on solving this? Set timeout to "never".. Create a new channel for each request? I'm assuming there is some overhead creating the channel?.. Increase the timeout to e.g. 5minutes and create new channel if the connection did timeout? Make it connection less somehow? (Without the overhead of creating channels..) Something else...
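
    The idle cut-off is the binding's receiveTimeout (10 minutes by default on the service side), so the least intrusive fix is to raise it in config and recreate the channel if it ever faults; channels are cheap to create compared with the ChannelFactory, so caching the factory and rebuilding the channel when its State is Faulted is a common middle ground. A client-side sketch (the server needs a matching binding, and the values are examples):

        <system.serviceModel>
          <bindings>
            <netTcpBinding>
              <binding name="longIdle" receiveTimeout="02:00:00" sendTimeout="00:01:00" />
            </netTcpBinding>
          </bindings>
          <client>
            <endpoint address="net.tcp://localhost:8002/MyEndpoint"
                      binding="netTcpBinding"
                      bindingConfiguration="longIdle"
                      contract="IMyService"
                      name="MyServiceEndpoint" />
          </client>
        </system.serviceModel>

    As for going "connection-less": net.tcp is inherently session and socket oriented; if that is undesirable, an HTTP binding is the alternative, at the cost of per-call overhead.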


  • oci_connect Blank Page in PHP

    - by Bryan
    UPDATE (3/8/2010) I installed Xdebug and have it tracing my code. This is the output I am getting: TRACE START [2010-03-08 17:53:05] 0.2090 327864 -> {main}() /data/aims3/http/octest.php:0 0.2091 327988 -> ini_set(string(14), string(1)) /data/aims3/http/octest.php:3 0.2093 327920 -> error_reporting(long) /data/aims3/http/octest.php:4 0.2094 328048 -> oci_connect(string(8), string(8), string(25)) /data/aims3/http/octest.php:6 The trace halts at that point. I have installed everything the same way on a local server and it works fine. To say I am at a complete loss would be putting it lightly. *NOTE: I ran make test and it returned FAIL on every test. I never ran this on my working machine to see if it reports the same errors. Any idea why make test would report FAIL but make doesn't report any error? I've installed the Oracle Instantclient with no reported errors along with the OCI8 PECL package and at a loss. Whenever I try to open a connection with oci_connect, it halts my entire PHP script. EXAMPLE: <?php ini_set ("display_errors", "1"); error_reporting(E_ALL); echo "before"; $conn = oci_connect("username", "password", "host"); echo "after"; ?> Returns a complete blank page. The module is loaded (seen in phpinfo) and everything installed with no errors. I am at a complete loss. CentOS: 5.4 Apache: 2.2.3 PHP: 5.3.1 InstantClient: 11.2 oci8: 1.4.1 Any thoughts? NOTES Apache Error Log reports nothing Attempted Debugging: 1: <?php ini_set ("display_errors", "1"); error_reporting(E_ALL); echo "before"; if(!function_exists('oci_connect')) die('Oracle Not Installed'); echo "after"; ?> Returns: beforeafter 2: Changing host to //host Returns: Same error
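
    Since the script dies inside oci_connect with nothing in the Apache log, two things are worth doing: capture the OCI error explicitly (in case oci_connect does return false), and confirm the Instant Client libraries are visible to the Apache process itself -- LD_LIBRARY_PATH set only in a login shell does not reach httpd, which is a common cause of silent failures (that last part is an assumption about this setup). A sketch:

        <?php
        ini_set('display_errors', '1');
        error_reporting(E_ALL);

        $conn = @oci_connect('username', 'password', 'host/SERVICE_NAME'); // easy-connect string
        if (!$conn) {
            $e = oci_error();   // no argument: returns the connection-time error
            die('oci_connect failed: ' . ($e ? htmlentities($e['message']) : 'no error reported'));
        }
        echo 'connected';

    If the process crashes before even that runs, Apache's error_log usually shows a child exit notice, and a mismatch between the Instant Client version on disk and the one oci8 was compiled against is the usual suspect.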


  • How can we implement change notification propagation for WPF and SL in the MVVM pattern?

    - by Firoso
    Here's an example scenario targetting MVVM WPF/SL development: View data binds to view model Property "Target" "Target" exposes a field of an object called "data" that exists in the local application model, called "Original" when "Original" changes, it should raise notification to the view model and then propogate that change notification to the View. Here are the solutions I've come up with, but I don't like any of them all that much. I'm looking for other ideas, by the time we come up with something rock solid I'm certain Microsoft will have released .NET 5 with WPF/SL extensions for better tools for MVVM development. For now the question is, "What have you done to solve this problem and how has it worked out for you?" Option 1. Proposal: Attach a handler to data's PropertyChanged event that watches for string values of properties it cares about that might have changed, and raises the appropriate notification. Why I don't like it: Changes don't bubble naturally, objects must be explicitly watched, if data changes to a new source, events must be un-registered/registered. Why I kind of like it: I get explicit control over propogation of changes, and I don't have to use any types that belong at a higher level of the application such as dependancy properties. Option 2. Proposal: Attach a handler to data's PropertyChanged event that re-raises the event across all properties using the name property name. Why I don't like it: This is essentially the same as option 1, but less intelligent, and forces me to never change my property names, as they have to be the same as the property names on data Why I kind of like it: It's very easy to set up and I don't have to think about it... Then again if I try to think, and change names to things that make sense, I shoot myself in the foot, and then I have to think about it! Option 3. Proposal: Inherit my view model from dependancy object, and notify binding sources of changes directly. Why I don't like it: I'm not even 100% sure dependancy properties/objects can DO this, it was just a thought to look into. Also I don't personally feel that WPF/SL types like Dep Obj belong at the view model level. Why I kind of like it: IF it has the capability that I'm seeking then it's a good answer! minus that pesky layering issue. Option 4. Proposal: Use a consistant agent messaging system based off of Task Parallels DataFlow Library to propogate everything through linked pipelining. Why I don't like it: I've never tried this, and somehow I think it will be lacking, plus it requires me to think about my code completely differently all the way around. Why I kind of like it: It has the possiblity of allowing me to do some VERY fun manipulations with a push based data model and using ActionBlocks as validation AND setters to then privately change view model properties and explicitly control PropertyChanged notifications.
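
    For what it is worth, option 1 does not have to stay fragile: the view model can subscribe to the wrapped model once and translate model property names into its own in a single place, and re-subscribe in a setter if the underlying data object is ever swapped. A hedged C# sketch with invented type and property names (it assumes the model itself implements INotifyPropertyChanged):

        using System.ComponentModel;

        public class TargetViewModel : INotifyPropertyChanged
        {
            private readonly Original _data;   // "Original" is the model from the scenario above

            public TargetViewModel(Original data)
            {
                _data = data;
                _data.PropertyChanged += OnModelPropertyChanged;
            }

            public string Target
            {
                get { return _data.SomeField; }   // "SomeField" is a placeholder
            }

            private void OnModelPropertyChanged(object sender, PropertyChangedEventArgs e)
            {
                // The one place where model names map to view-model names.
                if (e.PropertyName == "SomeField")
                    Raise("Target");
            }

            public event PropertyChangedEventHandler PropertyChanged;

            private void Raise(string name)
            {
                PropertyChangedEventHandler handler = PropertyChanged;
                if (handler != null)
                    handler(this, new PropertyChangedEventArgs(name));
            }
        }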


  • Proxy settings with Ivy...

    - by user315228
    Hi, I have an issue where in I have defined dependancies in ivy.xml on our internal corporate svn. I am able to access this svn site without any proxy task in ant. While my dependencies resides on ibiblio, that’s something outside our corporate, and needs proxy inorder to download something. I am facing problem using ivy here: I have following in build.xml <target name="proxy" <property name="proxy.host" value="xyz.proxy.net"/ <property name="proxy.port" value="8443"/ <setproxy proxyhost="${proxy.host}" proxyport="${proxy.port}"/ </target <!-- resolve the dependencies of stratus --> <target name="resolveTestDependency" depends="testResolve, proxy" description="retrieve test dependencies with ivy"> <ivy:settings file="stratus-ivysettings.xml" /> <ivy:retrieve conf="test" pattern="${jars}/[artifact]-[revision].[ext]"/><!--pattern here specifies where do you want to download lib to?--> </target> <target name=" testResolve "> <ivy:settings file="stratus-ivysettings.xml" /> <ivy:resolve conf="test" file="stratus-ivy.xml"/> </target> Following is the excerpt from stratus-ivysettings.xml <resolvers <!-- here you define your file in private machine not on the repo (e.g. jPricer.jar or edgApi.jar)-- <!-- This we will use a url nd not local file system.. -- <url name="privateFS" <ivy pattern="http://xyz.svn.com/ivyRepository/ [organisation]/ivy/ivy.xml"/ </url . . . <url name="public" m2compatible="true" <artifact pattern="http://www.ibiblio.org/maven2/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]"/ </url . . . So as can be seen here for getting ivy.xml, I don’t need any proxy as its within our own network which cant be accesses when I set proxy. But on the other hand I am using ibiblio as well which is external to our network and works only with proxy. So above build.xml wont work in that case. Can somebody help here. I don’t need proxy while getting ivy.xml (as if I have proxy, ivy wont be able to find ivy file behind proxy from within the network), and I just need it when my resolver goes to public url.
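
    Ant's setproxy task takes a nonproxyhosts attribute, so the internal SVN host can bypass the proxy while ibiblio still goes through it -- then the proxy target can simply run before every resolve without breaking the internal resolver. A sketch (the proxy and SVN host names are copied from above, the wildcard entry is an assumption):

        <target name="proxy">
            <property name="proxy.host" value="xyz.proxy.net"/>
            <property name="proxy.port" value="8443"/>
            <!-- hosts that bypass the proxy, separated by "|" -->
            <setproxy proxyhost="${proxy.host}"
                      proxyport="${proxy.port}"
                      nonproxyhosts="xyz.svn.com|*.svn.com|localhost"/>
        </target>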


  • PHP throws 'Allowed memory exhausted' errors while migrating data in Drupal.

    - by Stan
    I'm trying to setup a tiny sandbox on a local machine to play around with Drupal. I created a few CCK types; in order to create a few nodes I wrote the following script: chdir('C:\..\drupal'); require_once '.\includes\bootstrap.inc'; drupal_bootstrap(DRUPAL_BOOTSTRAP_FULL); module_load_include('inc', 'node', 'node.pages'); $node = array('type' => 'my_type'); $link = mysql_connect(..); mysql_select_db('my_db'); $query_bldg = ' SELECT stuff FROM table LIMIT 10 '; $result = mysql_query($query_bldg); while ($row = mysql_fetch_object($result)) { $form_state = array(); $form_state['values']['name'] = 'admin'; $form_state['values']['status'] = 1; $form_state['values']['op'] = t('Save'); $form_state['values']['title'] = $row->val_a; $form_state['values']['my_field'][0]['value'] = $row->val_b; ## About another dozen or so of similar assignments... drupal_execute('node_form', $form_state, (object)$node); } Here are a few relevant lines from php_errors.log: [12-Jun-2010 05:02:47] PHP Notice: Undefined index: REMOTE_ADDR in C:\..\drupal\includes\bootstrap.inc on line 1299 [12-Jun-2010 05:02:47] PHP Notice: Undefined index: REMOTE_ADDR in C:\..\drupal\includes\bootstrap.inc on line 1299 [12-Jun-2010 05:02:47] PHP Warning: session_start(): Cannot send session cookie - headers already sent by (output started at C:\..\drupal\includes\bootstrap.inc:1299) in C:\..\drupal\includes\bootstrap.inc on line 1143 [12-Jun-2010 05:02:47] PHP Warning: session_start(): Cannot send session cache limiter - headers already sent (output started at C:\..\drupal\includes\bootstrap.inc:1299) in C:\..\drupal\includes\bootstrap.inc on line 1143 [12-Jun-2010 05:02:47] PHP Warning: Cannot modify header information - headers already sent by (output started at C:\..\drupal\includes\bootstrap.inc:1299) in C:\..\drupal\includes\bootstrap.inc on line 709 [12-Jun-2010 05:02:47] PHP Warning: Cannot modify header information - headers already sent by (output started at C:\..\drupal\includes\bootstrap.inc:1299) in C:\..\drupal\includes\bootstrap.inc on line 710 [12-Jun-2010 05:02:47] PHP Warning: Cannot modify header information - headers already sent by (output started at C:\..\drupal\includes\bootstrap.inc:1299) in C:\..\drupal\includes\bootstrap.inc on line 711 [12-Jun-2010 05:02:47] PHP Warning: Cannot modify header information - headers already sent by (output started at C:\..\drupal\includes\bootstrap.inc:1299) in C:\..\drupal\includes\bootstrap.inc on line 712 [12-Jun-2010 05:02:47] PHP Notice: Undefined index: REMOTE_ADDR in C:\..\drupal\includes\bootstrap.inc on line 1299 [12-Jun-2010 05:02:48] PHP Fatal error: Allowed memory size of 239075328 bytes exhau sted (tried to allocate 261904 bytes) in C:\..\drupal\includes\form.inc on line 488 [12-Jun-2010 05:03:22] PHP Fatal error: Allowed memory size of 239075328 bytes exhausted (tried to allocate 261904 bytes) in C:\..\drupal\includes\form.inc on line 488 [12-Jun-2010 05:04:34] PHP Fatal error: Allowed memory size of 262144 bytes exhausted (tried to allocate 261904 bytes) in Unknown on line 0 At this point any action php takes results in the last error shown above. I tried increasing the value of memory_limit in php.ini before the final Fatal error which obviously didn't help. How can the error be eliminated? Am I on a correct path to migrating data into Drupal or should the cck tables be operated on directly? Windows XP PHP 5.3.2 VC6 Apache 2.2
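
    Two different limits show up in that log: the loop itself exhausted a roughly 228 MB limit inside form.inc (drupal_execute runs the whole Form API workflow for every node, and Drupal's static caches grow as the loop runs, so a long import loop genuinely is memory-hungry), but the very last line reports "Allowed memory size of 262144 bytes", i.e. 256 KB, which suggests memory_limit ended up as a bare number or with the wrong suffix somewhere along the way. A sketch of the usual adjustments, values being examples only:

        ; php.ini -- note the M suffix: "262144" on its own means 256 KB, not 256 MB
        memory_limit = 512M

        // or at the very top of the migration script, before drupal_bootstrap():
        ini_set('memory_limit', '512M');
        // the REMOTE_ADDR / session notices come from bootstrapping outside a web request:
        $_SERVER['REMOTE_ADDR'] = '127.0.0.1';

    For large imports it also helps to process the source rows in smaller batches (re-running the script over LIMIT/OFFSET ranges) so each PHP process stays under the limit.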

    Read the article

  • Ruby on Rails has_one Model Not Supplying ID Column

    - by Metric Scantlings
    I have a legacy Rails (version 1.2.3) app which runs without issue on a number of servers (not to mention my local environment). Since deploying it to its newest server, though, I get ActiveRecord::StatementInvalid: Mysql::Error: #23000Column 'video_id' cannot be null errors. Below are the models/relationships, simplified: class Video < ActiveRecord::Base has_one(:user, :dependent => :destroy) end class User < ActiveRecord::Base belongs_to(:video) end And below is a Rails console transcript of the relationship failing: >> video = Video.create(:title => 'New Video') => #<Video:0xb6d5e31c>... >> video.id => 5 >> video.user = User.create(:name => 'Tester') ActiveRecord::StatementInvalid: Mysql::Error: #23000Column 'video_id' cannot be null: INSERT INTO users (`name`, `video_id`) VALUES('Tester', NULL) from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/connection_adapters/abstract_adapter.rb:128:in `log' from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/connection_adapters/mysql_adapter.rb:243:in `execute' from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/connection_adapters/mysql_adapter.rb:253:in `insert' from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/base.rb:1811:in `create_without_callbacks' from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/callbacks.rb:254:in `create_without_timestamps' from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/timestamp.rb:39:in `create' from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/base.rb:1789:in `create_or_update_without_callbacks' from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/callbacks.rb:242:in `create_or_update' from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/base.rb:1545:in `save_without_validation' from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/validations.rb:752:in `save_without_transactions' from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/transactions.rb:129:in `save' from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/connection_adapters/abstract/database_statements.rb:59:in `transaction' from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/transactions.rb:95:in `transaction' from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/transactions.rb:121:in `transaction' from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/transactions.rb:129:in `save' from /usr/lib/ruby/gems/1.8/gems/activerecord-1.15.3/lib/active_record/base.rb:451:in `create' from (irb):3 from :0 Has anyone else come across ActiveRecord not sending an ID when it clearly knows it?
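
    One possibility worth checking (an assumption, not something stated in the post): the new server's MySQL may be running in strict mode, so the bare User.create, which knows nothing about the video and therefore inserts video_id as NULL, is rejected there instead of being silently accepted. Creating the user through the association fills in the foreign key before the INSERT runs; a sketch using the helper that has_one generates:

        # create_user is generated by has_one(:user); it saves the record
        # with video_id already set to video.id
        video = Video.create(:title => 'New Video')
        user  = video.create_user(:name => 'Tester')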

    Read the article

  • Using sem_t in a Qt Project

    - by thauburger
    Hi everyone, I'm working on a simulation in Qt (C++), and would like to make use of a Semaphore wrapper class I made for the sem_t type. Although I am including semaphore.h in my wrapper class, running qmake provides the following error: 'sem_t does not name a type' I believe this is a library/linking error, since I can compile the class without problems from the command line. I've read that you can specify external libraries to include during compilation. However, I'm a) not sure how to do this in the project file, and b) not sure which library to include in order to access semaphore.h. Any help would be greatly appreciated. Thanks, Tom Here's the wrapper class for reference: Semaphore.h #ifndef SEMAPHORE_H #define SEMAPHORE_H #include <semaphore.h> class Semaphore { public: Semaphore(int initialValue = 1); int getValue(); void wait(); void post(); private: sem_t mSemaphore; }; #endif Semaphore.cpp #include "Semaphore.h" Semaphore::Semaphore(int initialValue) { sem_init(&mSemaphore, 0, initialValue); } int Semaphore::getValue() { int value; sem_getvalue(&mSemaphore, &value); return value; } void Semaphore::wait() { sem_wait(&mSemaphore); } void Semaphore::post() { sem_post(&mSemaphore); } And, the QT Project File: TARGET = RestaurantSimulation TEMPLATE = app QT += SOURCES += main.cpp \ RestaurantGUI.cpp \ RestaurantSetup.cpp \ WidgetManager.cpp \ RestaurantView.cpp \ Table.cpp \ GUIFood.cpp \ GUIItem.cpp \ GUICustomer.cpp \ GUIWaiter.cpp \ Semaphore.cpp HEADERS += RestaurantGUI.h \ RestaurantSetup.h \ WidgetManager.h \ RestaurantView.h \ Table.h \ GUIFood.h \ GUIItem.h \ GUICustomer.h \ GUIWaiter.h \ Semaphore.h FORMS += RestaurantSetup.ui LIBS += Full Compiler Output: g++ -c -pipe -g -gdwarf-2 -arch i386 -Wall -W -DQT_GUI_LIB -DQT_CORE_LIB -DQT_SHARED - I/usr/local/Qt4.6/mkspecs/macx-g++ -I. - I/Library/Frameworks/QtCore.framework/Versions/4/Headers -I/usr/include/QtCore - I/Library/Frameworks/QtGui.framework/Versions/4/Headers -I/usr/include/QtGui - I/usr/include -I. -I. -F/Library/Frameworks -o main.o main.cpp In file included from RestaurantGUI.h:10, from main.cpp:2: Semaphore.h:14: error: 'sem_t' does not name a type make: *** [main.o] Error 1 make: Leaving directory `/Users/thauburger/Desktop/RestaurantSimulation' Exited with code 2. Error while building project RestaurantSimulation When executing build step 'Make'
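
    On the narrower question of how to point the project file at an external library: qmake takes extra linker flags on the LIBS variable and extra header directories on INCLUDEPATH. The sketch below is only illustrative; whether the POSIX semaphore functions actually need an extra library on this Mac build is an assumption (on many Linux systems they live in libpthread or librt):

        # in RestaurantSimulation.pro
        # external libraries go on LIBS, extra header directories on INCLUDEPATH
        LIBS += -lpthread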

    Read the article

  • Project References DLL version hell

    - by Mr Shoubs
    We're having problems getting Visual Studio to pick up the latest version of a DLL from one of our projects. We have multiple class library projects (e.g. BusinessLogic, ReportData) and a number of web services, each of which has a reference to a Connectivity DLL we've written (this reference to the Connectivity DLL is the problem). We always point references to the DLL in the bin/debug folder (which is where we always build to for any given project), and all custom DLL references have CopyLocal = True and SpecificVersion = False. ReportData has a reference to BusinessLogic (which also has a reference to Connectivity - I don't see why this should cause a problem, but thought it worth mentioning). The weird thing is, when you click "Add Reference", browse to Connectivity/bin/debug and hover the mouse over the DLL file, the correct (latest) version is shown (version and file version are always incremented together), but when you click OK, a previous version number is pulled through. Even when I look in the current project's debug folder (where Copy Local would put the DLL after compiling), it shows the latest version number. Nowhere can I find the previous version of the DLL outside of Visual Studio, but that project's references list has the old version, even though the path is correct. I'm at a loss as to where it might be getting the old versions from, or even why it wants that one. This is possibly the most frustrating problem I have ever come across. Does anyone know how to ensure the latest version is pulled through (preferably automatically or on compile)? EDIT: Although it is not exactly the scenario I'm dealing with, I was reading this article and somewhere it mentions the CLR ignoring revision numbers. Understandable (even though this hasn't been a problem before - we're on revision 39), so I thought I would update the build number; that still didn't work. In a vain attempt I thought I would update the minor version number and see if that made any difference. I'm not saying this is the answer, as I have to check quite a few things first, but on the face of it, this seems to have solved my problem... Further edit: In other class libraries this seems to have solved the problem; however, in a test Windows application it still pulls a previous version through :( If I increment the minor version number again, the same problem comes back and I am left with the wrong version being pulled through. Further edit: I created an entirely new project, added a reference and still had the exact same problem. This suggests the problem is restricted to the project I am referencing. Wish I knew why! Has anyone had this problem before, and does anyone know how to get around it? HELP!
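
    One option not mentioned in the post, offered purely as a sketch with a placeholder GUID and path, is to swap the file reference to bin/debug for a project reference, so that Visual Studio builds the Connectivity project itself and always picks up its fresh output rather than resolving a previously copied DLL:

        <!-- in the referencing .csproj; the GUID and relative path are placeholders -->
        <ItemGroup>
          <ProjectReference Include="..\Connectivity\Connectivity.csproj">
            <Project>{00000000-0000-0000-0000-000000000000}</Project>
            <Name>Connectivity</Name>
          </ProjectReference>
        </ItemGroup>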

    Read the article

  • Switching DataSources in ReportViewer in WinForms

    - by Mike Wills
    I have created a WinForm for the users to view the many reports I am creating for them. I have a drop-down list of report names which triggers display of the appropriate parameter fields. Once those are filled in, they press Submit and the report appears. This works the first time they hit the screen. They can change the parameters and the ReportViewer works fine. Change to a different report, though, and I get the following ReportViewer error: An error occurred during local report processing. An error has occurred during the report processing. A data source instance has not been supplied for the data source "CgTempData_BusMaintenance". This is the process I use: I set reportName (string) to the physical RDLC name. I set dataSource (string) to the DataSource name. I fill a generic DataTable with the data for the report to run from. I make the ReportViewer visible. I set LocalReport.ReportPath = "Reports\\" + reportName; I clear the data sources with LocalReport.DataSources.Clear(). I add the new source with LocalReport.DataSources.Add(new ReportDataSource(dataSource, dt)); I call RefreshReport() on the ReportViewer. Here is the portion of the code that sets up and displays the ReportViewer: /// <summary> /// Builds the report. /// </summary> private void BuildReport() { DataTable dt = null; ReportingCG rcg = new ReportingCG(); if (reportName == "GasUsedReport.rdlc") { dataSource = "CgTempData_FuelLog"; CgTempData.FuelLogDataTable DtFuelLog = rcg.BuildFuelUsedTable(fromDate, toDate); dt = DtFuelLog; } else if (reportName == "InventoryCost.rdlc") { CgTempData.InventoryUsedDataTable DtInventory; dataSource = "CgTempData_InventoryUsed"; DtInventory = rcg.BuildInventoryUsedTable(fromDate, toDate); dt = DtInventory; } else if (reportName == "VehicleMasterList.rdlc") { dataSource = "CgTempData_VehicleMaster"; CgTempData.VehicleMasterDataTable DtVehicleMaster = rcg.BuildVehicleMasterTable(); dt = DtVehicleMaster; } else if (reportName == "BusCosts.rdlc") { dataSource = "CgTempData_BusMaintenance"; dt = rcg.BuildBusCostsTable(fromDate, toDate); } // Setup the DataSource this.reportViewer1.Visible = true; this.reportViewer1.LocalReport.ReportPath = "Reports\\" + reportName; this.reportViewer1.LocalReport.DataSources.Clear(); this.reportViewer1.LocalReport.DataSources.Add(new ReportDataSource(dataSource, dt)); this.reportViewer1.RefreshReport(); } Any ideas how to remove all of the old remaining data? Do I dispose the object and recreate it?
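
    One thing that may help, offered as a sketch rather than a confirmed fix: the WinForms ReportViewer keeps state from the previously loaded report, so when switching RDLCs it can be worth resetting the control before pointing it at the new report, instead of only clearing the data sources. It is also worth double-checking that the dataSource string matches the DataSet name embedded in each RDLC exactly.

        // Sketch: Reset() returns the control to its default state, discarding the
        // previously loaded report definition and its data source bindings.
        this.reportViewer1.Reset();
        this.reportViewer1.LocalReport.ReportPath = "Reports\\" + reportName;
        this.reportViewer1.LocalReport.DataSources.Clear();
        this.reportViewer1.LocalReport.DataSources.Add(new ReportDataSource(dataSource, dt));
        this.reportViewer1.RefreshReport();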

    Read the article

  • SQLCMD Restore works in Management Studio but not from DOS prompt

    - by Gautam
    Any idea why my Restore command works fine when run in Management Studio 2008 but not when run from the DOS prompt? Shown below is the error when running from the DOS prompt. C:\>SQLCMD -s local\SQL2008 -d master -Q "RESTORE DATABASE [Sample.Db] FROM DISK = N'C:\Sample.Db.bak' WITH FILE = 1, MOVE N'Sample.Db' TO N'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db.mdf', MOVE N'Sample.Db_log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db_log.ldf', NOUNLOAD, REPLACE, STATS = 10" Msg 3634, Level 16, State 1, Server GAUTAM, Line 1 The operating system returned the error '32(The process cannot access the file because it is being used by another process.)' while attempting 'RestoreContainer::ValidateTargetForCreation' on 'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db.mdf'. Msg 3156, Level 16, State 8, Server GAUTAM, Line 1 File 'Sample.Db' cannot be restored to 'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db.mdf'. Use WITH MOVE to identify a valid location for the file. Msg 3634, Level 16, State 1, Server GAUTAM, Line 1 The operating system returned the error '32(The process cannot access the file because it is being used by another process.)' while attempting 'RestoreContainer::ValidateTargetForCreation' on 'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db_log.ldf'. Msg 3156, Level 16, State 8, Server GAUTAM, Line 1 File 'Sample.Db_log' cannot be restored to 'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db_log.ldf'. Use WITH MOVE to identify a valid location for the file. Msg 3119, Level 16, State 1, Server GAUTAM, Line 1 Problems were identified while planning for the RESTORE statement. Previous messages provide details. Msg 3013, Level 16, State 1, Server GAUTAM, Line 1 RESTORE DATABASE is terminating abnormally. However, if I execute this directly in Management Studio 2008, it works fine: RESTORE DATABASE [Sample.Db] FROM DISK = N'C:\Sample.Db.bak' WITH FILE = 1, MOVE N'Sample.Db' TO N'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db.mdf', MOVE N'Sample.Db_log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db_log.ldf', NOUNLOAD, REPLACE, STATS = 10 There are no lock or security issues, and the database doesn't exist on the server. I can't figure it out. Any ideas?
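
    One detail worth ruling out (a general observation about sqlcmd, not something taken from the post): its switches are case-sensitive, and lower-case -s sets the column separator while upper-case -S names the server. As written, the command may be connecting to the default local instance rather than the local\SQL2008 named instance, which could explain why the target MDF/LDF files appear to be in use by another process. A sketch of the corrected invocation, everything else unchanged:

        REM -S (upper case) selects the server\instance
        SQLCMD -S local\SQL2008 -d master -Q "RESTORE DATABASE [Sample.Db] FROM DISK = N'C:\Sample.Db.bak' WITH FILE = 1, MOVE N'Sample.Db' TO N'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db.mdf', MOVE N'Sample.Db_log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL10.SQL2008\MSSQL\DATA\Sample.Db_log.ldf', NOUNLOAD, REPLACE, STATS = 10"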

    Read the article
