Search Results

Search found 26517 results on 1061 pages for 'large directory'.

  • Linux Kernel - Refreshing VFS Dentry Cache

    - by Mike D
    I wrote a system call that opens a directory and gets the file object and the dentry struct. I'm trying to list all entries, including entries in subdirectories, using the list_for_each() macro. The problem is that it only displays what is currently in the dentry cache. If I open the directory with Nautilus and then rerun the system call, all the entries are listed. Is there a way to get the exact list of entries, or to refresh the cache?

        f = s_open(tpath);
        fle = fget(f);
        d = fle->f_path.dentry;
        list_for_each ( position , &d->d_subdirs ) {
            dtmp = list_entry(position, struct dentry, d_u.d_child);
            ...
        }
        sys_close(f);
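
    A minimal sketch of an alternative approach (my own illustration against the 2.6-era in-kernel API that the d_u.d_child field above implies, not the poster's code): d_subdirs is populated lazily, so it only ever lists entries that have already been looked up. To enumerate every entry regardless of cache state, ask the filesystem itself through vfs_readdir() with a filldir callback. Exact signatures should be checked against the running kernel's headers.

        #include <linux/fs.h>
        #include <linux/kernel.h>

        /* filldir callback: invoked once per directory entry, cached or not */
        static int print_entry(void *buf, const char *name, int namelen,
                               loff_t offset, u64 ino, unsigned d_type)
        {
                printk(KERN_INFO "entry: %.*s\n", namelen, name);
                return 0;               /* returning non-zero stops the walk */
        }

        /* fle is the struct file * already obtained from fget() above */
        static void list_all_entries(struct file *fle)
        {
                vfs_readdir(fle, print_entry, NULL);
        }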

  • NDK does not find the standard C++ libraries

    - by Marcos Vasconcelos
    Hi, I'm trying to compile a native program for Android, but when running the ndk-build command I get the following result:

        /home/marcos/dev/workspace/rmsdk.native.wraper/jni/include-all/uft_alloc.h:26:21: error: stdexcept: No such file or directory
        /home/marcos/dev/workspace/rmsdk.native.wraper/jni/include-all/uft_alloc.h:27:18: error: limits: No such file or directory

    stdexcept and limits are part of the standard C++ library. This is my Android.mk:

        LOCAL_PATH := $(call my-dir)
        MY_PATH := $(LOCAL_PATH)
        include $(call all-subdir-makefiles)
        LOCAL_PATH := $(MY_PATH)
        include $(CLEAR_VARS)
        LOCAL_LDLIBS := -llog
        LOCAL_MODULE := rmsdk
        LOCAL_SRC_FILES := curlnetprovider.cpp RMServices.cpp
        LOCAL_C_INCLUDES := $(LOCAL_PATH)/include-all
        LOCAL_STATIC_LIBRARIES := adept cryptopenssl curl dp expat fonts hobbes jpeg mschema png t3 xml zlib
        include $(BUILD_SHARED_LIBRARY)

    Do I have to tell it explicitly that this is C++ source?
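
    A hedged suggestion (the STL name below is an assumption and depends on the NDK release in use; older NDK releases did not bundle an STL at all): the NDK's default C library ships without the C++ standard headers, so <stdexcept> and <limits> are only found once an STL implementation is selected in jni/Application.mk. A minimal sketch:

        # jni/Application.mk -- sketch; use whichever STL this NDK release provides
        # (stlport_static, stlport_shared or gnustl_static)
        APP_STL := gnustl_static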

  • ASP.NET - Missing #includes cause compilation errors: Failed to map the path '...'

    - by frankadelic
    I have an ASP.NET application which features some server-side includes. For example: <!--#include virtual="/scripts.inc" --> These files are not present in my ASP.NET website project because my website starts in a virtual directory: /path-to-my-application When I choose Build Web Site, I get this error: Failed to map the path '/scripts.inc' Visual Studio cannot resolve these include files that are defined at the root directory level. They are not visible in the website project. Aside from manually commenting out the #include references, is there any way I can get the website to build? Can I force Visual Studio to ignore those errors and compile the site? Once the website is pushed out to IIS, there is no problem, because all the #include files are in place. NOTE - Web Controls are not an option for this application. Please assume #include files are a requirement. Also, I cannot move the include files since they are used by other applications.

  • Maven assembly - Error reading assemblies

    - by Laurent
    Dear all, I have defined a personalized jar-with-dependencies assembly descriptor. However, when I execute it with mvn assembly:assembly, I get : ... [INFO] META-INF/ already added, skipping [INFO] META-INF/MANIFEST.MF already added, skipping [INFO] javax/ already added, skipping [INFO] META-INF/ already added, skipping [INFO] META-INF/MANIFEST.MF already added, skipping [INFO] META-INF/maven/ already added, skipping [INFO] [assembly:assembly {execution: default-cli}] [INFO] ------------------------------------------------------------------------ [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Error reading assemblies: No assembly descriptors found. My jar-with-dependencies.xml is in src/main/resources/assemblies/. My assembly descriptor is the following : <?xml version='1.0' encoding='UTF-8'?> <assembly> <id>jar-with-dependencies</id> <formats> <format>jar</format> </formats> <dependencySets> <dependencySet> <scope>runtime</scope> <unpack>true</unpack> <unpackOptions> <excludes> <exclude>**/LICENSE*</exclude> <exclude>**/README*</exclude> </excludes> </unpackOptions> </dependencySet> </dependencySets> <fileSets> <fileSet> <directory>${project.build.outputDirectory}</directory> <outputDirectory>/</outputDirectory> </fileSet> <fileSet> <directory>src/main/resources/META-INF/services</directory> <outputDirectory>META-INF/services</outputDirectory> </fileSet> </fileSets> </assembly> And my project pom.xml is : <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-assembly-plugin</artifactId> <version>2.2-beta-5</version> <executions> <execution> <id>jar-with-dependencies</id> <phase>package</phase> <goals> <goal>single</goal> </goals> <configuration> <descriptors> <descriptor>jar-with-dependencies.xml</descriptor> </descriptors> <archive> <manifest> <mainClass>org.my.app.HowTo</mainClass> </manifest> </archive> </configuration> </execution> </executions> </plugin> When mvn assembly:assembly is performed, dependencies are unpacked and I get the previous error when unpack has finished. Moreover, if I execute mvn -e assembly:assembly it is say that no descriptors has been found, however it try to unpack dependencies and a JAR with dependencies is created but it doesn't contain META-INF/services/* as specified in descriptor : [ERROR] BUILD ERROR [INFO] ------------------------------------------------------------------------ [INFO] Error reading assemblies: No assembly descriptors found. [INFO] ------------------------------------------------------------------------ [INFO] Trace org.apache.maven.lifecycle.LifecycleExecutionException: Error reading assemblies: No assembly descriptors found. 
at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:719) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:284) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180) at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328) at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138) at org.apache.maven.cli.MavenCli.main(MavenCli.java:362) at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315) at org.codehaus.classworlds.Launcher.launch(Launcher.java:255) at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430) at org.codehaus.classworlds.Launcher.main(Launcher.java:375) Caused by: org.apache.maven.plugin.MojoExecutionException: Error reading assemblies: No assembly descriptors found. at org.apache.maven.plugin.assembly.mojos.AbstractAssemblyMojo.execute(AbstractAssemblyMojo.java:356) at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490) at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694) ... 17 more Caused by: org.apache.maven.plugin.assembly.io.AssemblyReadException: No assembly descriptors found. at org.apache.maven.plugin.assembly.io.DefaultAssemblyReader.readAssemblies(DefaultAssemblyReader.java:206) at org.apache.maven.plugin.assembly.mojos.AbstractAssemblyMojo.execute(AbstractAssemblyMojo.java:352) ... 19 more I don't see my error. Does someone has a solution ? Kind Regards Laurent
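
    A hedged reading of the build output above: when the assembly plugin is invoked directly as mvn assembly:assembly, configuration that lives inside an <execution> block is not applied to that command-line invocation, and <descriptor> paths are resolved relative to the project basedir rather than to src/main/resources. Moving the configuration up to the plugin level and giving the descriptor's full project-relative path is one way to test this (the path below simply mirrors the layout described in the question):

        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-assembly-plugin</artifactId>
          <version>2.2-beta-5</version>
          <!-- plugin-level configuration is seen both by the bound execution
               and by a direct "mvn assembly:assembly" / "mvn assembly:single" -->
          <configuration>
            <descriptors>
              <descriptor>src/main/resources/assemblies/jar-with-dependencies.xml</descriptor>
            </descriptors>
            <archive>
              <manifest>
                <mainClass>org.my.app.HowTo</mainClass>
              </manifest>
            </archive>
          </configuration>
          <executions>
            <execution>
              <id>jar-with-dependencies</id>
              <phase>package</phase>
              <goals>
                <goal>single</goal>
              </goals>
            </execution>
          </executions>
        </plugin>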

  • How do I get Flex Builder to use the selected framework?

    - by Michael Prescott
    I'm attempting to create a Flex project that will cause the Flash Player to cache the Flex framework. Flex Builder comes with Flex SDK 3.2.0.3958, and setting the Framework Linkage to Runtime Shared Library (RSL) under Project Properties - Flex Build Path separates the framework from my main application; I can see that my project's bin-debug directory contains framework_3.2.0.3958.swf and *.swz for distribution. Flex SDK 3.4 fixes a few bugs, so I configured it as another available SDK and set it as the default SDK. When I compile, I expect the bin-debug directory to contain framework_3.4.0.9271.swf and *.swz; however, Flex Builder is still writing framework_3.2.0.3958.swf and *.swz. How do I configure Flex Builder to package the correct framework files for Flash Player caching?

  • ASP.NET HTTPHandler

    - by josephj1989
    I have an HttpHandler class (implementing IHttpHandler) where the path defined for the handler in web.config is *.jpg. I am requesting a JPG image in my page. Within the HTTP handler I am writing to a file in the filesystem. By mistake I was trying to write to a nonexistent directory. This should have thrown an exception, but execution simply proceeds. Of course no file is written. But if I give a proper directory, the file is written correctly. Is there anything special about HttpHandler exceptions? See part of the code:

        public void ProcessRequest(HttpContext context)
        {
            File.WriteAllLines(context.Request.ApplicationPath + @"\" + "resul.log",
                               new string[] { "Entered JPG Handler" });
        }
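
    A hedged sketch of how the failure may be getting hidden rather than swallowed (the class and file names below are illustrative, not from the question): Request.ApplicationPath is only a virtual path such as "/myapp", so the concatenated string is not a path inside the web application at all, and an unhandled exception in ProcessRequest normally becomes an HTTP 500 that the browser shows only as a broken image. Mapping to a physical path and surfacing the error explicitly makes both problems visible:

        using System;
        using System.IO;
        using System.Web;

        public class JpgHandler : IHttpHandler
        {
            public bool IsReusable { get { return true; } }

            public void ProcessRequest(HttpContext context)
            {
                // MapPath turns the virtual application path into a physical one
                string logPath = context.Server.MapPath("~/handler.log");
                try
                {
                    File.WriteAllLines(logPath, new string[] { "Entered JPG Handler" });
                }
                catch (Exception ex)
                {
                    // make the failure observable instead of a silent broken image
                    context.Response.StatusCode = 500;
                    context.Response.Write(ex.ToString());
                }
            }
        }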

  • replace file with hardlink to another file atomically

    - by Ben Clifford
    I have two directory entries, a and b. Before, a and b point to different inodes. Afterwards, I want b to point to the same inode as a does. I want this to be safe - by which I mean that if I fail somewhere, b either points to its original inode or to a's inode. Most especially, I don't want to end up with b disappearing. mv is atomic when overwriting; ln appears not to work when the destination already exists. So it looks like I can say:

        ln a tmp
        mv tmp b

    which in case of failure will leave a 'tmp' file around - undesirable, but not a disaster. Is there a better way to do this? (What I'm actually trying to do is replace files that have identical content with a single inode containing that content, shared between all directory entries.)
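
    For what it's worth, a minimal C sketch of the same two-step idea (my own illustration, not from the question): link() creates the temporary hard link and rename() replaces b atomically, so b always refers to either its old inode or a's inode. The temporary name is illustrative; a real tool would generate a unique one.

        #include <stdio.h>
        #include <unistd.h>

        int replace_with_hardlink(const char *a, const char *b)
        {
            const char *tmp = "b.tmp.lnk";      /* illustrative temporary name */

            if (link(a, tmp) == -1) {           /* new directory entry for a's inode */
                perror("link");
                return -1;
            }
            if (rename(tmp, b) == -1) {         /* atomically replaces b */
                perror("rename");
                unlink(tmp);                    /* best-effort cleanup */
                return -1;
            }
            return 0;
        }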

  • Silverlight: Why can't Silverlight see my XAML file referenced in a MergedDictionary?

    - by Sergio
    Hi there, I have a Silverlight application with a number of styles defined in a separate XAML file under the directory /Styles. My App.xaml looks like this:

        <Application.Resources>
            <ResourceDictionary>
                <ResourceDictionary.MergedDictionaries>
                    <ResourceDictionary Source="/Styles/Legend.xaml"/>
                </ResourceDictionary.MergedDictionaries>
            </ResourceDictionary>
        </Application.Resources>

    Legend.xaml has its Build Action set to Content and 'Do Not Copy' as the copy-to-output-directory setting. The message I'm getting in Blend is: An error occurred while finding the resource dictionary "/Styles/Legend.xaml". Thanks in advance!
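
    One variant worth trying (a hedged sketch; "MyAssembly" is a placeholder for the project's assembly name, and it assumes Legend.xaml is embedded in the assembly with Build Action set to Resource): reference the dictionary through an assembly-qualified URI, which does not depend on how the XAP is laid out and which the Blend designer tends to resolve more reliably.

        <ResourceDictionary.MergedDictionaries>
            <!-- "MyAssembly" is a placeholder for the assembly's short name -->
            <ResourceDictionary Source="/MyAssembly;component/Styles/Legend.xaml"/>
        </ResourceDictionary.MergedDictionaries>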

  • How do I change apache2 DocumentRoot (default snow leopard install) and not get "You don't have perm

    - by David Peek
    I'm trying to point DocumentRoot at a directory in my user folder. While I can happily point DocumentRoot at /Library/WebServer/Documents and ~/Sites I keep getting "You don't have permission to access / on this server." when I point it anywhere else. OK, I just found a solution mid-question (stack overflow is just that good) by changing the user/group apache runs under to myuser/admin. I'm sure there must be a better way though. Surely some kind of permissions magic on the directory I'm pointing at?
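
    For reference, a hedged sketch of the usual alternative to changing the Apache user (the path below is a placeholder): keep _www as the user, make sure every directory from / down to the new DocumentRoot is searchable by it (chmod o+x on each parent), and add a matching <Directory> block in /etc/apache2/httpd.conf, since the default configuration only grants access to the directories it already knows about. Snow Leopard ships Apache 2.2, hence the Order/Allow syntax:

        DocumentRoot "/Users/david/Sites/mysite"
        <Directory "/Users/david/Sites/mysite">
            Options Indexes FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>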

  • OpenCV 2.0 and Python

    - by Jive Dadson
    I cannot get the example Python programs to run. When executing the Python command "from opencv import cv" I get the message "ImportError: No module named _cv". There is a stale _cv.pyd in the site-packages directory, but no _cv.py anywhere. See step 5 below. MS Windows XP, VC++ 2008, Python 2.6, OpenCV 2.0. Here's what I have done:

    1. Downloaded and ran the MS Windows installer for OpenCV 2.0.
    2. Downloaded and installed CMake.
    3. Downloaded and installed SWIG.
    4. Ran CMake. After unchecking "ENABLE_OPENMP" in the CMake GUI, I was able to build OpenCV using INSTALL.vcproj and BUILD_ALL.vcproj. I do not know what the difference is, so I built everything under both of those project files. The C example programs run fine.
    5. Copied the contents of OpenCV2.0/Python2.6/lib/site-packages to my installed Python2.6/lib/site-packages directory. I notice that it contains an old _cv.pyd and an old libcv.dll.a.

  • Why can't WordPress create directories when uploading themes?

    - by Arman
    I can't work out how to solve this problem so that WordPress will let me upload themes. I have a fresh copy of Fedora 17 installed on my dev machine. I then installed MySQL using: yum install mysql mysql-server. Next I installed WordPress, which also installs Apache and PHP: yum install wordpress. I can go to http://localhost/wordpress and see WordPress working. But when I tried to install my theme it asked for FTP credentials. I then updated the wp-config.php file and set the FS_METHOD constant to direct. Now it doesn't ask for FTP credentials, but it gives me this error: Could not create directory. /usr/share/wordpress/wp-content/themes/my-theme-name/ The httpd service is running under the 'apache' user and 'apache' group. The /usr/share/wordpress/ directory is recursively owned by the 'apache' user and 'apache' group too. I've even set the permissions to 777 (also recursively), and even then I keep getting the same error as above. How can I solve this problem?
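
    When ownership and even 777 permissions make no difference on Fedora, SELinux is the usual suspect: the default targeted policy keeps httpd from writing outside directories labelled for read-write web content. A hedged sketch of how to check and, if that is the cause, relabel the path from the error above:

        # Is SELinux enforcing, and how is wp-content currently labelled?
        getenforce
        ls -Zd /usr/share/wordpress/wp-content/themes

        # Allow httpd to write there (chcon lasts until the next full relabel;
        # use semanage fcontext + restorecon to make it permanent)
        chcon -R -t httpd_sys_rw_content_t /usr/share/wordpress/wp-content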

  • 503 (Server Unavailable) WebException when loading local XHTML file in Visual C# 2008

    - by kcoppock
    Hello! I'm currently working on an ePub reader application, and I've been reading through a bunch of regular XML files just fine with System.Xml and XmlDocument:

        XmlDocument xmldoc = new XmlDocument();
        xmldoc.Load(Path.Combine(Directory.GetCurrentDirectory(), "META-INF/container.xml"));
        XmlNodeList xnl = xmldoc.GetElementsByTagName("rootfile");

    However, now I'm trying to open the XHTML files that contain the actual book text. I don't really know the difference between the two formats, but I'm getting the following error with this code (in the same document, using the same XmlDocument and XmlNodeList variables):

        xmldoc.Load(Path.Combine(Directory.GetCurrentDirectory(), "OEBPS/part1.xhtml"));

        "WebException was unhandled: The remote server returned an error: (503) Server Unavailable"

    It's a local document, so I don't understand why it's giving this error. Any help would be greatly appreciated. :) I've got the full source code here if it helps: http://drop.io/epubtest (I know the ePubConstructor.ParseDocument() method is horribly messy; I'm just trying to get it working before I split it into classes.)
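
    A plausible explanation (hedged, since I can't see the files): XHTML documents begin with a DOCTYPE pointing at the W3C's DTD, and XmlDocument.Load tries to download that DTD by default; w3.org throttles such requests, which is exactly where a 503 from a "remote server" can come from while loading a purely local file. Disabling external resolution is a quick way to test the theory:

        XmlDocument xmldoc = new XmlDocument();
        xmldoc.XmlResolver = null;   // don't try to fetch the XHTML DTD over the network
        xmldoc.Load(Path.Combine(Directory.GetCurrentDirectory(), "OEBPS/part1.xhtml"));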

  • Software Engineering Practices – Different Projects should have different maturity levels

    - by Dylan Smith
    I’ve had a lot of discussions at the office lately about the drastically different sets of software engineering practices used on our various projects, if what we are doing is appropriate, and what factors should you be considering when determining what practices are most appropriate in a given context. I wanted to write up my thoughts in a little more detail on this subject, so here we go: If you compare any two software projects (specifically comparing their codebases) you’ll often see very different levels of maturity in the software engineering practices employed. By software engineering practices, I’m specifically referring to the quality of the code and the amount of technical debt present in the project. Things such as Test Driven Development, Domain Driven Design, Behavior Driven Development, proper adherence to the SOLID principles, etc. are all practices that you would expect at the mature end of the spectrum. At the other end of the spectrum would be the quick-and-dirty solutions that are done using something like an Access Database, Excel Spreadsheet, or maybe some quick “drag-and-drop coding”. For this blog post I’m going to refer to this as the Software Engineering Maturity Spectrum (SEMS). I believe there is a time and a place for projects at every part of that SEMS. The risks and costs associated with under-engineering solutions have been written about a million times over so I won’t bother going into them again here, but there are also (unnecessary) costs with over-engineering a solution. Sometimes putting multiple layers, and IoC containers, and abstracting out the persistence, etc is complete overkill if a one-time use Access database could solve the problem perfectly well. A lot of software developers I talk to seem to automatically jump to the very right-hand side of this SEMS in everything they do. A common rationalization I hear is that it may seem like a small trivial application today, but these things always grow and stick around for many years, then you’re stuck maintaining a big ball of mud. I think this is a cop-out. Sure you can’t always anticipate how an application will be used or grow over its lifetime (can you ever??), but that doesn’t mean you can’t manage it and evolve the underlying software architecture as necessary (even if that means having to toss the code out and re-write it at some point…maybe even multiple times). My thoughts are that we should be making a conscious decision around the start of each project approximately where on the SEMS we want the project to exist. I believe this decision should be based on 3 factors: 1. Importance - How important to the business is this application? What is the impact if the application were to suddenly stop working? 2. Complexity - How complex is the application functionality? 3. Life-Expectancy - How long is this application expected to be in use? Is this a one-time use application, does it fill a short-term need, or is it more strategic and is expected to be in-use for many years to come? Of course this isn’t an exact science. You can’t say that Project X should be at the 73% mark on the SEMS and expect that to be helpful. My point is not that you need to precisely figure out what point on the SEMS the project should be at then translate that into some prescriptive set of practices and techniques you should be using. 
Rather my point is that we need to be aware that there is a spectrum, and that not everything is going to be (or should be) at the edges of that spectrum, indeed a large number of projects should probably fall somewhere within the middle; and different projects should adopt a different level of software engineering practices and maturity levels based on the needs of that project. To give an example of this way of thinking from my day job: Every couple of years my company plans and hosts a large event where ~400 of our customers all fly in to one location for a multi-day event with various activities. We have some staff whose job it is to organize the logistics of this event, which includes tracking which flights everybody is booked on, arranging for transportation to/from airports, arranging for hotel rooms, name tags, etc The last time we arranged this event all these various pieces of data were tracked in separate spreadsheets and reconciliation and cross-referencing of all the data was literally done by hand using printed copies of the spreadsheets and several people sitting around a table going down each list row by row. Obviously there is some room for improvement in how we are using software to manage the event’s logistics. The next time this event occurs we plan to provide the event planning staff with a more intelligent tool (either an Excel spreadsheet or probably an Access database) that can track all the information in one location and make sure that the various pieces of data are properly linked together (so for example if a person cancels you only need to delete them from one place, and not a dozen separate lists). This solution would fall at or near the very left end of the SEMS meaning that we will just quickly create something with very little attention paid to using mature software engineering practices. If we examine this project against the 3 criteria I listed above for determining it’s place within the SEMS we can see why: Importance – If this application were to stop working the business doesn’t grind to a halt, revenue doesn’t stop, and in fact our customers wouldn’t even notice since it isn’t a customer facing application. The impact would simply be more work for our event planning staff as they revert back to the previous way of doing things (assuming we don’t have any data loss). Complexity – The use cases for this project are pretty straightforward. It simply needs to manage several lists of data, and link them together appropriately. Precisely the task that access (and/or Excel) can do with minimal custom development required. Life-Expectancy – For this specific project we’re only planning to create something to be used for the one event (we only hold these events every 2 years). If it works well this may change (see below). Let’s assume we hack something out quickly and it works great when we plan the next event. We may decide that we want to make some tweaks to the tool and adopt it for planning all future events of this nature. In that case we should examine where the current application is on the SEMS, and make a conscious decision whether something needs to be done to move it further to the right based on the new objectives and goals for this application. This may mean scrapping the access database and re-writing it as an actual web or windows application. In this case, the life-expectancy changed, but let’s assume the importance and complexity didn’t change all that much. 
We can still probably get away with not adopting a lot of the so-called “best practices”. For example, we can probably still use some of the RAD tooling available and might have an Autonomous View style design that connects directly to the database and binds to typed datasets (we might even choose to simply leave it as an access database and continue using it; this is a decision that needs to be made on a case-by-case basis). At Anvil Digital we have aspirations to become a primarily product-based company. So let’s say we use this tool to plan a handful of events internally, and everybody loves it. Maybe a couple years down the road we decide we want to package the tool up and sell it as a product to some of our customers. In this case the project objectives/goals change quite drastically. Now the tool becomes a source of revenue, and the impact of it suddenly stopping working is significantly less acceptable. Also as we hold focus groups, and gather feedback from customers and potential customers there’s a pretty good chance the feature-set and complexity will have to grow considerably from when we were using it only internally for planning a small handful of events for one company. In this fictional scenario I would expect the target on the SEMS to jump to the far right. Depending on how we implemented the previous release we may be able to refactor and evolve the existing codebase to introduce a more layered architecture, a robust set of automated tests, introduce a proper ORM and IoC container, etc. More likely in this example the jump along the SEMS would be so large we’d probably end up scrapping the current code and re-writing. Although, if it was a slow phased roll-out to only a handful of customers, where we collected feedback, made some tweaks, and then rolled out to a couple more customers, we may be able to slowly refactor and evolve the code over time rather than tossing it out and starting from scratch. The key point I’m trying to get across is not that you should be throwing out your code and starting from scratch all the time. But rather that you should be aware of when and how the context and objectives around a project changes and periodically re-assess where the project currently falls on the SEMS and whether that needs to be adjusted based on changing needs. Note: There is also the idea of “spectrum decay”. Since our industry is rapidly evolving, what we currently accept as mature software engineering practices (the right end of the SEMS) probably won’t be the same 3 years from now. If you have a project that you were to assess at somewhere around the 80% mark on the SEMS today, but don’t touch the code for 3 years and come back and re-assess its position, it will almost certainly have changed since the right end of the SEMS will have moved farther out (maybe the project is now only around 60% due to decay). Developer Skills Another important aspect to this whole discussion is around the skill sets of your architects and lead developers. When talking about the progression of a developers skills from junior->intermediate->senior->… they generally start by only being able to write code that belongs on the left side of the SEMS and as they gain more knowledge and skill they become capable of working at a higher and higher level along the SEMS. 
We all realize that the learning never stops, but eventually you’ll get to the point where you can comfortably develop at the right-end of the SEMS (the exact practices and techniques that translates to is constantly changing, but that’s not the point here). A critical skill that I’d love to see more evidence of in our industry is the most senior guys not only being able to work at the right-end of the SEMS, but more importantly be able to consciously work at any point along the SEMS as project needs dictate. An even more valuable skill would be if you could make the conscious decision to move a projects code further right on the SEMS (based on changing needs) and do so in an incremental manner without having to start from scratch. An exercise that I’m planning to go through with all of our projects here at Anvil in the near future is to map out where I believe each project currently falls within this SEMS, where I believe the project *should* be on the SEMS based on the business needs, and for those that don’t match up (i.e. most of them) come up with a plan to improve the situation.

  • How do you fix an SVN 409 Conflict Error

    - by NerdStarGamer
    I used to use SVN 1.4 on OS X Leopard and everything was fine. A couple of weeks ago I installed a fresh copy of OS X 10.6. The version of SVN that comes with Snow Leopard is 1.6.5. I went ahead and built my own copy with 1.6.6. I'm using the built in apache server and just hosting repositories locally. Everything appeared to work fine until I actually tried to commit something. Everytime I try to commit a change, I get the following message: Transmitting file data .svn: Commit failed (details follow): svn: MERGE of '/svn/svn2': 409 Conflict (http://localhost) This happens with my old repositories, so I created a couple of new ones. Same deal. I also tried using the 1.6.5 version that comes with the system...same. Finally, I tried upgrading to the latest stable SVN (1.6.9) and still got the same problem. The Apache error logs the following for each failed commit: [Mon Mar 29 19:53:10 2010] [error] [client ::1] Could not MERGE resource "/svn/svn2/!svn/act/d399326f-c20f-424f-bb68-3bb40503b5b1" into "/svn/svn2". [409, #0] [Mon Mar 29 19:53:10 2010] [error] [client ::1] An error occurred while committing the transaction. [409, #2] [Mon Mar 29 19:53:10 2010] [error] [client ::1] Can't open directory '/usr/local/svn/svn2/db/transactions/5-6.txn/\xeb\xa9\x0f\x1f': No such file or directory [409, #2] [Mon Mar 29 19:53:11 2010] [error] [client ::1] Could not DELETE /svn/svn2/!svn/act/d399326f-c20f-424f-bb68-3bb40503b5b1. [500, #0] [Mon Mar 29 19:53:11 2010] [error] [client ::1] could not open transaction. [500, #2] [Mon Mar 29 19:53:11 2010] [error] [client ::1] Can't open file '/usr/local/svn/svn2/db/transactions/5-6.txn/props': No such file or directory [500, #2] And from the access log: ::1 - - [30/Mar/2010:13:02:20 -0400] "OPTIONS /svn/svn2 HTTP/1.1" 401 401 ::1 - user [30/Mar/2010:13:02:20 -0400] "OPTIONS /svn/svn2 HTTP/1.1" 200 188 ::1 - user [30/Mar/2010:13:02:20 -0400] "PROPFIND /svn/svn2 HTTP/1.1" 207 647 ::1 - user [30/Mar/2010:13:02:20 -0400] "PROPFIND /svn/svn2 HTTP/1.1" 207 647 ::1 - user [30/Mar/2010:13:02:20 -0400] "PROPFIND /svn/svn2/!svn/vcc/default HTTP/1.1" 207 398 ::1 - user [30/Mar/2010:13:02:20 -0400] "PROPFIND /svn/svn2/!svn/bln/6 HTTP/1.1" 207 449 ::1 - user [30/Mar/2010:13:02:20 -0400] "REPORT /svn/svn2/!svn/vcc/default HTTP/1.1" 200 1172 Curiously, the commit does actually commit the changes, but the working copy doesn't see that and everything gets screwy. I've tried to Google every variation I can think of for this problem, but the search results are pretty much useless. I'm not using TortoiseSVN or anything special and commits fail on a new repository, so I know it's not a problem with my old repos. Any help would be greatly appreciated.

  • Missing ant-javamail.jar file on Macintosh

    - by Ken
    I've been running the built-in Ant from the command line on a Macintosh (10.5.5) and have run into some trouble with the Mail task. Running the Mail task produces the following message: [mail] Failed to initialise MIME mail: org.apache.tools.ant.taskdefs.email.MimeMailer This is most likely due to a missing ant-javamail.jar file in the /usr/share/ant/lib directory. I see a "ant-javamail-1.7.0.pom" file in this directory but not the appropriate jar file. Anyone know why this jar file might be missing and what the best way to resolve the problem is?

  • Web Deployment Projects for VS2010 on build server failing with Error MSB4086

    - by SteveBering
    When I upgraded my Web Deployment Project from VS2008 to the VS2010 beta version, I was able to execute the build locally on my development box. However, when I tried to execute the build on our TeamCity build server, I began getting the following exception: C:\Program Files\MSBuild\Microsoft\WebDeployment\v10.0\Microsoft.WebDeployment.targets(162, 37): error MSB4086: A numeric comparison was attempted on "$(_SourceWebProjectPath.Length)" that evaluates to "" instead of a number, in condition "'$(_SourceWebProjectPath)' != '' And $(_SourceWebProjectPath.Length) >= 4)". I did install the Web Deployment Project addin on my build server and I did copy over the C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications directory on my development box to the C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\ directory on the build server. Note: My dev box is 64bit and the build server 32bit. I can't figure out why this is behaving differently on the build server than on my dev machine. Anyone have any ideas? Thanks, Steve

  • How to get GWT 2.0 and Restlet 2.0 to play nice

    - by Holograham
    Hello, I am having trouble getting Restlet to play nicely with GWT in the same project. I have been trying the examples from Restlet's website to no avail. I am using Eclipse, the Maven2 plugin, GWT, and Restlet GWT. I have never used server-side code in this GWT project before, and I know there is some custom setup involved. I am deploying locally for the time being, using the built-in Jetty in GWT hosted mode. I can get my front end to display, but my back end is not executing. I did add this to my web.xml file, taken from the example (for the time being I am trying to drop the example into my project):

        <servlet>
            <servlet-name>adapter</servlet-name>
            <servlet-class>org.restlet.ext.gwt.GwtShellServletWrapper</servlet-class>
            <init-param>
                <param-name>org.restlet.application</param-name>
                <param-value>org.restlet.example.gwt.server.TestServerApplication</param-value>
            </init-param>
        </servlet>
        <servlet-mapping>
            <servlet-name>adapter</servlet-name>
            <url-pattern>/*</url-pattern>
        </servlet-mapping>

    I did notice that my web.xml file is located separately from my war deployment directory. My project directory structure is set up as follows:

        Project Root
            src/main/java
            src/main/resources
            src/main/test
            JRE
            Maven Dependencies
            GWT SDK
            src
                main
                    webapp
                        WEB-INF
                            web.xml
            target
                war
                    <my project dir>
                        WEB-INF
                            lib
            pom.xml

    So there is no web.xml under my war file's WEB-INF directory. I am new to this type of application, so it is most likely a matter of me not understanding the directory structure and how my GWT project compiles into these directories. Any help is appreciated! If I need to provide any more info, let me know. Thanks

  • GD DLL installation

    - by smokinguns
    I'm using the JpGraph library (a PHP graphing library). I'm getting the following error: "JpGraph Error Your PHP installation does not seem to have the required GD 2.x library enabled. Please see the PHP documentation, "Image" section. Make sure that "php_gd2.dll" statement is uncomment in the [modules] section in the php.ini file". I uncommented the php_gd2.dll statement in php.ini (in C:\WINDOWS) and set 'extension_dir' to extension_dir = C:\PHP\extensions (I have an extensions directory and put php_gd2.dll in this directory). I restarted IIS, and when I fire up my web app, it doesn't come up. I commented out php_gd2.dll and restarted IIS again, and the web app came up. I don't understand what I'm doing wrong. Here is the server config: Windows Server 2003 with IIS 6. The PHP version is 4.3.8.
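
    A quick way to see what this PHP instance actually loaded (a throwaway sketch - drop it in the web root, request it once, then delete it). phpinfo() also reports which php.ini the IIS-side PHP is really reading, which on Windows is not always the copy in C:\WINDOWS:

        <?php
        // true only if the GD extension was loaded successfully
        var_dump(extension_loaded('gd'));
        if (function_exists('gd_info')) {
            print_r(gd_info());          // reports the bundled GD version
        }
        phpinfo();                       // shows the php.ini path and extension_dir in use
        ?>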

  • XAML Serialization object not using asp.net shadow copy

    - by mrwayne
    Hi, I'm having a problem where I use the XAML serializer / deserializer for a configuration-type file that I have. The problem I'm getting is that the XAML serializer is returning objects from the assembly in the /Bin directory, while the rest of the web application is using assemblies stored in the ..../Temporary Files/.. directory. Is there any way to prevent this from happening? Is this a bug in the XAML serializer / assembly loading routines? Every time I compile, I need to stop and start the ASP.NET application so the shadow copy and the bin are exactly the same file. Even recompiling without making a change to the DLL still causes the problem. Any thoughts on how to get around this? Currently I've tried turning shadow copy off, but then I have the same problem of needing to shut down and start up the web app every time I compile. Help!

  • luntbuild + maven + findbugs = OutOfMemoryException

    - by Johannes
    Hi, I've been trying to get Luntbuild to generate and publish a project site for our project including a Findbugs report. All other reports (Cobertura, Surefire, JavaDoc, Dashboard) work fine, but Findbugs bails out with an OutOfMemoryException. Excluding findbugs from report generation fixes the build --- although obviously without a Findbugs report. The funny thing is that I first encountered this problem locally and solved it by setting MAVEN_OPTS=-Xmx512m. This does not seem to be enough in Luntbuild, however: setting that exact same option as an environment variable of my builder doesn't make a difference. I've found a couple of posts on the 'net stating you should also add -XX:MaxPermSize=512m to MAVEN_OPTS and/or pass -Dmaven.findbugs.jvmargs=-Xmx512m to mvn.bat. None of these (or their combination) seem to help though so any hints would be greatly appreciated! Cheers, Johannes Relevant information: Luntbuild is 1.5.6, Maven is 2.1.0, findbugs-maven-plugin is 2.0.1. This is the Findbugs section of the relevant pom.xml: <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>findbugs-maven-plugin</artifactId> <version>2.0.1</version> </plugin> This is the head of my build log: User "luntbuild" started the build Perform checkout operation for VCS setting: Vcs name: Subversion Repository url base: http://some.repository.com/repo/ Repository layout: multiple Directory for trunk: trunk Directory for branches: branches Directory for tags: tags Username: xxxx Password:xxxx Web interface: ViewVC URL to web interface: http://some.repository.com/repo/ Quiet period: modules: Source path: somepath, Branch: , Label: , Destination path: somewhere Source path: somepath, Branch: somewhere1.0.x, Label: , Destination path: somewhere-1.0.x Source path: somepath, Branch: somewhere1.1.x, Label: , Destination path: somewhere-1.1.x Update url: http://some.repository.com/repo//trunk Duration of the checkout operation: 0 minutes Perform build with builder setting: Builder name: default Builder type: Maven2 builder Command to run Maven2: "C:\maven\apache-maven-2.1.0\bin\mvn.bat" -e -f somewhere\pom.xml -P site -Dmaven.test.skip=false -DbuildDate="Tue Nov 24 11:13:24 CET 2009" -DbuildVersion="site-core138" -Dsvn.username=xxxx -Dsvn.password=xxxx -DstagingSiteURL=file:///C:/luntbuild/core-reports -Dmaven.findbugs.jvmargs=-Xmx512m Directory to run Maven2 in: Goals to build: site:stage site:stage-deploy Build properties: buildVersion="site-core138" artifactsDir="C:\\Program Files\\Luntbuild\\publish\\somewhere\\site-core\\site-core138\\artifacts" buildDate="Tue Nov 24 11:13:24 CET 2009" junitHtmlReportDir="" Environment variables: MAVEN_OPTS="-Xmx512m -XX:MaxPermSize=512m" Build success condition: result==0 and builderLogContainsLine("INFO","BUILD SUCCESSFUL") Execute command: Executing 'C:\maven\apache-maven-2.1.0\bin\mvn.bat' with arguments: '-e' '-f' 'somewhere\pom.xml' '-P' 'site' '-Dmaven.test.skip=false' '-DbuildDate=Tue Nov 24 11:13:24 CET 2009' '-DbuildVersion=site-core138' '-Dsvn.username=xxxxxx' '-Dsvn.password=xxxxxx' '-DstagingSiteURL=file:///C:/luntbuild/reports' '-Dmaven.findbugs.jvmargs=-Xmx512m' '-DbuildVersion=site-core138' '-DartifactsDir=C:\\Program Files\\Luntbuild\\publish\\somewhere\\site-core\\site-core138\\artifacts' '-DbuildDate=Tue Nov 24 11:13:24 CET 2009' '-X' 'site:stage' 'site:stage-deploy' Tail of my build log: Analyzed: C:\luntbuild\somewhere-work\somewhere\...\SomeClass.class ... 
Analyzed: C:\luntbuild\somewhere-work\somewhere\...\target\classes Aux: C:\luntbuild\somewhere-work\somewhere\...\target\classes Aux: c:\maven\local-repo\...\somejar-1.1.1.1-SNAPSHOT.jar Aux: c:\maven\local-repo\commons-lang\commons-lang\2.3\commons-lang-2.3.jar .... Aux: c:\maven\local-repo\org\openoffice\ridl\3.1.0\ridl-3.1.0.jar Aux: c:\maven\local-repo\org\openoffice\unoil\3.1.0\unoil-3.1.0.jar [INFO] ------------------------------------------------------------------------ [ERROR] FATAL ERROR [INFO] ------------------------------------------------------------------------ [INFO] Java heap space [INFO] ------------------------------------------------------------------------ [DEBUG] Trace java.lang.OutOfMemoryError: Java heap space at java.util.HashMap.(HashMap.java:209) at edu.umd.cs.findbugs.ba.type.TypeAnalysis$CachedExceptionSet.(TypeAnalysis.java:114) at edu.umd.cs.findbugs.ba.type.TypeAnalysis.getCachedExceptionSet(TypeAnalysis.java:688) at edu.umd.cs.findbugs.ba.type.TypeAnalysis.computeThrownExceptionTypes(TypeAnalysis.java:439) at edu.umd.cs.findbugs.ba.type.TypeAnalysis.transfer(TypeAnalysis.java:411) at edu.umd.cs.findbugs.ba.type.TypeAnalysis.transfer(TypeAnalysis.java:89) at edu.umd.cs.findbugs.ba.Dataflow.execute(Dataflow.java:356) at edu.umd.cs.findbugs.classfile.engine.bcel.TypeDataflowFactory.analyze(TypeDataflowFactory.java:82) at edu.umd.cs.findbugs.classfile.engine.bcel.TypeDataflowFactory.analyze(TypeDataflowFactory.java:44) at edu.umd.cs.findbugs.classfile.impl.AnalysisCache.analyzeMethod(AnalysisCache.java:331) at edu.umd.cs.findbugs.classfile.impl.AnalysisCache.getMethodAnalysis(AnalysisCache.java:281) at edu.umd.cs.findbugs.classfile.engine.bcel.CFGFactory.analyze(CFGFactory.java:173) at edu.umd.cs.findbugs.classfile.engine.bcel.CFGFactory.analyze(CFGFactory.java:64) at edu.umd.cs.findbugs.classfile.impl.AnalysisCache.analyzeMethod(AnalysisCache.java:331) at edu.umd.cs.findbugs.classfile.impl.AnalysisCache.getMethodAnalysis(AnalysisCache.java:281) at edu.umd.cs.findbugs.ba.ClassContext.getMethodAnalysis(ClassContext.java:937) at edu.umd.cs.findbugs.ba.ClassContext.getMethodAnalysisNoDataflowAnalysisException(ClassContext.java:921) at edu.umd.cs.findbugs.ba.ClassContext.getCFG(ClassContext.java:326) at edu.umd.cs.findbugs.detect.BuildUnconditionalParamDerefDatabase.analyzeMethod(BuildUnconditionalParamDerefDatabase.java:103) at edu.umd.cs.findbugs.detect.BuildUnconditionalParamDerefDatabase.considerMethod(BuildUnconditionalParamDerefDatabase.java:93) at edu.umd.cs.findbugs.detect.BuildUnconditionalParamDerefDatabase.visitClassContext(BuildUnconditionalParamDerefDatabase.java:79) at edu.umd.cs.findbugs.DetectorToDetector2Adapter.visitClass(DetectorToDetector2Adapter.java:68) at edu.umd.cs.findbugs.FindBugs2.analyzeApplication(FindBugs2.java:971) at edu.umd.cs.findbugs.FindBugs2.execute(FindBugs2.java:222) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at org.codehaus.groovy.reflection.CachedMethod.invoke(CachedMethod.java:86) at groovy.lang.MetaMethod.doMethodInvoke(MetaMethod.java:230) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:912) at groovy.lang.MetaClassImpl.invokeMethod(MetaClassImpl.java:756) [INFO] ------------------------------------------------------------------------ [INFO] Total time: 17 minutes 16 
seconds [INFO] Finished at: Tue Nov 24 11:31:23 CET 2009 [INFO] Final Memory: 70M/127M [INFO] ------------------------------------------------------------------------ Maven2 builder failed: build success condition not met! Note that apparently maven only uses 70MB... but that probably doesn't mean anything since the Findbugs plugin forks its own process.
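
    One hedged avenue (the parameter names below exist in later releases of the findbugs-maven-plugin; whether 2.0.1 already supports them is an assumption worth checking with mvn help:describe): the stack trace suggests FindBugs is running inside the Maven JVM here, so its heap is whatever heap mvn.bat actually got, which in turn depends on Luntbuild passing MAVEN_OPTS through. Forking FindBugs into its own JVM with an explicit heap removes that dependency:

        <plugin>
          <groupId>org.codehaus.mojo</groupId>
          <artifactId>findbugs-maven-plugin</artifactId>
          <version>2.3.2</version>
          <configuration>
            <!-- run FindBugs in a separate JVM with its own heap limit,
                 independent of whatever MAVEN_OPTS reaches mvn.bat -->
            <fork>true</fork>
            <maxHeap>512</maxHeap>
          </configuration>
        </plugin>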

  • Copy task in Visual Studio 2008

    - by Maurizio Reginelli
    This is a question related to a previous question I posted (see here). I need to configure a .csproj file to copy some files from a directory to another one (let me call them SOURCE and DESTINATION). I also need to change the SOURCE path depending from the configuration I'm using. For example, if I compile my project in Debug mode, the SOURCE path must contain a subfolder called Debug. I tried the solution proposed by Schmitt in the previous post, using $(ConfigurationName) to set the dynamic directory into the SOURCE path. When I opened the solution containing that project, a list of links to the source files appeared in the main tree of the project and they were correctly related to the Debug mode. But when I changed to the Release mode, I saw that the path of the linked source files were set again to the Debug version. Is there a way to specify a parameter in the Include attribute of the SourceFiles element? Thank you.
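
    For what it's worth, a hedged sketch of doing the copy inside an MSBuild target instead of a static item list (the SOURCE/DESTINATION paths are placeholders): items declared inside a target are evaluated when the target runs, so $(Configuration) is re-read on every build rather than being fixed to whatever was active when Visual Studio loaded the project. ItemGroup-inside-Target requires MSBuild 3.5, which is what VS2008 uses.

        <!-- sketch for a VS2008 .csproj; SOURCE/DESTINATION paths are placeholders -->
        <Target Name="AfterBuild">
          <ItemGroup>
            <FilesToCopy Include="$(ProjectDir)SOURCE\$(Configuration)\**\*.*" />
          </ItemGroup>
          <Copy SourceFiles="@(FilesToCopy)"
                DestinationFolder="$(ProjectDir)DESTINATION\%(RecursiveDir)" />
        </Target>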

  • UIKit/UIKit.h missing on a newer SDK version

    - by letsee
    Dear everyone, I have an application which was written for 2.1. Now I'm building that app with Xcode 3.2.5 and SDK 4.2. Here's the problem: when I try to Build and Run, I get the following errors:

        UIKit.framework/UIKit.h: No such file or directory
            In file included from users/.../classes/Radio.m
        UIKit.framework/UIKit.h: no such file or directory
            in users/.../classes/Radio.h

    I don't know why I'm facing this error, because UIKit.framework is included in my project's "Frameworks" group. I've updated the OS target and other similar options, and the application runs cleanly without UIKit. I would appreciate it if anyone could help me through. Regards,

  • Tools of the Trade

    - by Ajarn Mark Caldwell
    I got pretty excited a couple of days ago when my new laptop arrived. “The new phone books are here!  The new phone books are here!  I’m a somebody!” - Steve Martin in The Jerk It is a Dell Precision M4500 with an Intel i7 Core 2.8 GHZ running 64-bit Windows 7 with a 15.6” widescreen, 8 GB RAM, 256 GB SSD.  For some of you high fliers, this may be nothing to write home about, but compared to the 32–bit Windows XP laptop with 2 GB of RAM and a regular hard disk that I’m coming from, it’s a really nice step forward.  I won’t even bore you with the details of the desktop PC I was first given when I started here 5 1/2 years ago.  Let’s just say that things have improved.  One really nice thing is that while we are definitely running a lean and mean department in terms of staffing, my boss believes in supporting that lean staff with good tools in order to stay lean instead of having to spend even more money on additional employees.  Of course, that only goes so far, and at some point you have to add more people in order to get more work done, which is why we are bringing on-board a new employee and a new contract developer next week.  But that’s a different story for a different time. But the main topic for this post is to highlight the variety of tools that I use in my job and that you might find useful, too.  This is easy to do right now because the process of building up my new laptop from scratch has forced me to assemble a list of software that had to be installed and configured.  Keep in mind as you look through this list that I play many roles in our company.  My official title is Software Engineering Manager, but in addition to managing the team, I am also an active ASP.NET and SQL developer, the Database Administrator, and 50% of the SAN Administrator team.  So, without further ado, here are the tools and some comments about why I use them: Tool Purpose Virtual Clone Drive Easily mount an ISO image as a DVD Drive.  This is particularly handy when you are downloading disk images from Microsoft for your tools. SQL Server 2008 R2 Developer Edition We are migrating all of our active systems to SQL 2008 R2.  Developer Edition has all the features of Enterprise Edition, but intended for development use. SQL Server 2005 Developer Edition (BIDS ONLY) The migration to SSRS 2008 R2 is just getting started, and in the meantime, maintenance work still has to be done on the reports on our SQL 2005 server.  For some reason, you can’t use BIDS from 2008 to write reports for a 2005 server.  There is some different format and when you open 2005 reports in 2008 BIDS, it forces you to upgrade, and they can no longer be uploaded to a 2005 server.  Hopefully Microsoft will fix this soon in some manner similar to Visual Studio now allows you to pick which version of the .NET Framework you are coding against. Visual Studio 2010 Premium All of our application development is in ASP.NET, and we might as well use the tool designed for it. I’ve used a version of Visual Studio going all the way back to VB 6.0 and Visual Interdev. Vault Professional Client Several years ago we replaced Visual Source Safe with SourceGear Vault (then Fortress, and now Vault Pro), and I love it.  It is very reliable with low overhead - perfect for a small to medium size development team.  And being a small ISV, their support is exceptional. 
Red-Gate Developer Bundle with the SQL Source Control update for Vault I first used, and fell in love with, SQL Prompt shortly before Red-Gate bought it, and then Red-Gate’s first release made me love it even more.  SQL Refactor (which has since been rolled into the latest version of SQL Prompt) has saved me many hours and migraine’s trying to understand somebody else’s code when their indenting was nonexistant, or worse, irrational.  SQL Compare has been awesome for troubleshooting potential schema issues between different instances of system databases.  SQL Data Compare helped us identify the cause behind a bug which appeared in PROD but could not be reproduced in a nearly (but not quite exactly) identical copy in UAT.  And the newest tool we are embracing: SQL Source Control.  I blogged about it here (and here, and here) last December.  This is really going to help us keep each developer’s copy of the database in sync with one another. Fiddler Helps you watch the whole traffic stream on web visits.  Haven’t used it a lot, but it did help me track down some odd 404 errors we were finding in our own application logs.  Has some other JavaScript troubleshooting capabilities, but some of its usefulness has been supplanted by the Developer Tools option in IE8. Funduc Search & Replace Find any string anywhere in a mound of source code really, really fast.  Does RegEx searches, if you understand that foreign language.  Has really helped with some refactoring work to pinpoint, for example, everywhere a particular stored procedure is referenced, whether in .NET code or other SQL procedures (which we have in script files).  Provides in-context preview of the search results.  Fantastic tool, and a bargain price. SciTE SciTE is a Scintilla based Text Editor and it is a fantastic, light-weight tool for quickly reviewing (or writing) program code, SQL scripts, and extract files.  It has language-specific syntax highlighting.  I used it to write several batch and CMD programs a year ago, and to examine data extract files for exchanging information with other systems.  Extremely handy are the options to View End of Line and View Whitespace.  Ever receive a file that is supposed to use CRLF as an end-of-line marker, but really only has CRs?  SciTE will quickly make that visible. Infragistics Controls We do a lot of ASP.NET development, and frequently use the WebGrid, WebTab, and date picker controls.  We will likely be implementing the Hierarchical Data Grid soon.  Infragistics has control suites for WebForms, WinForms, Silverlight, and coming soon MVC/JQuery. WinZip - WITH Command-Line add-in The classic compression program with a great command-line interface that allows me to build those CMD (and soon PowerShell) programs for automated compression jobs.  Our versioned Build packages are zip files. XML Notepad Haven’t used this a lot myself, but one of my team really likes it for examining large XML files. LINQPad Again, haven’t used this one a lot, but it was recommended to me for learning and practicing my LINQ skills which will come in handy as we implement Entity Framework. SQL Sentry Plan Explorer SQL Server Show Plan on steroids.  Great for helping you focus on the parts of a large query that are of most importance.  Also great for just compressing the graphical plan into more readable layout. Araxis Merge A great DIFF and Merge tool.  SourceGear provides a great tool called DiffMerge that we use all the time, but occasionally, I like the cross-edit capabilities of Araxis Merge.  
For a while, we also produced DIFF reports in HTML that showed all the changes that occurred between two releases.  This was most important when we were putting out very small, but very important hot fixes on a very politically hot system.  The reports produced by Araxis Merge gave the Director of IS assurance that we were not accidentally introducing ripples throughout the system with our releases. Idera SQL Admin Toolset A great collection of tools including a password checker to help analyze your SQL Server for weak user passwords, a Backup Status tool to quickly scan a large list of servers and databases to identify any that are overdue for backups.  Particularly helpful for highlighting new databases that have been deployed without getting included in your backup processing.  I also like Space Analyzer to keep an eye on disk space consumed by database files. Idera SQL Job Manager This free tool provides a nice calendar view of SQL Server Job Schedules, but to a degree, you also get what you pay for.  We will be purchasing SQL Sentry Event Manager later this year as an even better job schedule reviewer/manager.  But in the meantime, this at least gives me a good view on potential resource conflicts across multiple instances of SQL Server. DBFViewer 2000 I inherited a couple of FoxPro databases that I have to keep an eye on occasionally and have not yet been able to migrate them to SQL Server. Balsamiq Mockups We are still in evaluation-mode on this tool, but I really like it as a quick UI mockup tool that does not require Visual Studio, so someone other than a programmer can do UI design.  The interface looks hand-drawn which definitely has some psychological benefits when communicating to users, too. FeedDemon I have to stay on top of my WAY TOO MANY blog subscriptions somehow.  I may read blogs on a couple of different computers, and FeedDemon’s integration with Google Reader allows me to keep them all in sync.  I don’t particularly like the Google Reader interface, or the fact that it always wanted to mark articles as read just because I scrolled past them.  FeedDemon solves this problem for me, and provides a multi-tabbed interface which is good because fairly frequently one blog will link to something else I want to read, and I can end up with a half-dozen open tabs all from one article. Synergy+ In my office, I run four monitors across two computers all with one mouse and keyboard.  Synergy is the magic software that makes this work. TweetDeck I’m not the most active Tweeter in the world, but when I want to check-in with the Twitterverse, this really helps.  I have found the #sqlhelp and #PoshHelp hash tags particularly useful, and I also have columns setup to make it easy to monitor #sqlpass, #PASSProfDev, and short term events like #sqlsat68.   Whew!  That’s a lot.  No wonder it took me a couple of days to get everything setup the way I wanted it.  Oh, that and actually getting some work accomplished at the same time.  Anyway, I know that is a huge dump of info, and most people never make it here to the end, so for those who did, let me say, CONGRATULATIONS, you made it! I hope you’ll find a new tool or two to make your work life a little easier.

  • Unable to build mercurial on OSX - Python.h not found

    - by Oscar Reyes
    From what I've read I need Python-Dev; how do I install it on OS X? I think the problem is that my Xcode was not properly installed, and I don't have the paths where I should. This previous question, http://stackoverflow.com/questions/2685887/where-is-gcc-on-osx-i-have-installed-xcode-already, was about not being able to find gcc; now I can't find Python.h. Should I just link my /Developer directory to somewhere else in /usr/? This is my output:

        $ sudo easy_install mercurial
        Password:
        Searching for mercurial
        Reading http://pypi.python.org/simple/mercurial/
        Reading http://www.selenic.com/mercurial
        Best match: mercurial 1.5.1
        Downloading http://mercurial.selenic.com/release/mercurial-1.5.1.tar.gz
        Processing mercurial-1.5.1.tar.gz
        Running mercurial-1.5.1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-_7RaTq/mercurial-1.5.1/egg-dist-tmp-l7JP3u
        mercurial/base85.c:12:20: error: Python.h: No such file or directory
        ...

    Thanks in advance.

  • trying to build Boost MPI, but the lib files are not created. What's going on?

    - by unknownthreat
    I am trying to run a program with Boost MPI, but the thing is I don't have the .lib. So I try to create one by following the instruction at http://www.boost.org/doc/libs/1_43_0/doc/html/mpi/getting_started.html#mpi.config The instruction says "For many users using LAM/MPI, MPICH, or OpenMPI, configuration is almost automatic", I got myself OpenMPI in C:\, but I didn't do anything more with it. Do we need to do anything with it? Beside that, another statement from the instruction: "If you don't already have a file user-config.jam in your home directory, copy tools/build/v2/user-config.jam there." Well, I simply do what it says. I got myself "user-config.jam" in C:\boost_1_43_0 along with "using mpi ;" into the file. Next, this is what I've done: bjam --with-mpi C:\boost_1_43_0>bjam --with-mpi WARNING: No python installation configured and autoconfiguration failed. See http://www.boost.org/libs/python/doc/building.html for configuration instructions or pass --without-python to suppress this message and silently skip all Boost.Python targets Building the Boost C++ Libraries. warning: skipping optional Message Passing Interface (MPI) library. note: to enable MPI support, add "using mpi ;" to user-config.jam. note: to suppress this message, pass "--without-mpi" to bjam. note: otherwise, you can safely ignore this message. warning: Unable to construct ./stage-unversioned warning: Unable to construct ./stage-unversioned Component configuration: - date_time : not building - filesystem : not building - graph : not building - graph_parallel : not building - iostreams : not building - math : not building - mpi : building - program_options : not building - python : not building - random : not building - regex : not building - serialization : not building - signals : not building - system : not building - test : not building - thread : not building - wave : not building ...found 1 target... The Boost C++ Libraries were successfully built! The following directory should be added to compiler include paths: C:\boost_1_43_0 The following directory should be added to linker library paths: C:\boost_1_43_0\stage\lib C:\boost_1_43_0> I see that there are many libs in C:\boost_1_43_0\stage\lib, but I see no trace of libboost_mpi-vc100-mt-1_43.lib or libboost_mpi-vc100-mt-gd-1_43.lib at all. These are the libraries required for linking in mpi applications. What could possibly gone wrong when libraries are not being built?
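
    A hedged observation about the output above: bjam is still printing "note: to enable MPI support, add 'using mpi ;' to user-config.jam", which suggests it never read the user-config.jam that was created - Boost.Build looks for that file in the user's home directory (%HOMEDRIVE%%HOMEPATH% on Windows) rather than in the Boost source root. Something along these lines is worth trying (the compiler-wrapper path is a placeholder for wherever Open MPI was installed):

        # %HOMEDRIVE%%HOMEPATH%\user-config.jam  -- sketch
        # Point the mpi module at the MPI compiler wrapper explicitly if
        # auto-detection fails; the path below is a placeholder.
        using mpi : "C:/OpenMPI/bin/mpic++.exe" ;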
