Search Results

Search found 49518 results on 1981 pages for 'configuration files'.


  • autocomplete not working on one server, works on others

    - by dogmatic69
    I have Ubuntu 10.10 x64 and x86 running on various servers, and autocompletion works on all of them bar one. The issue: apt-<tab> would show a list of options, but sudo apt-<tab> would not. After fiddling with it for a few hours I found that /etc/bash_autocomplete did not exist on the broken server. After copying the one from a working server it now works, but still not properly: sudo apt-get ins<tab> does nothing. Listing the files in /etc/bash_autocomplete.d/ on the working server shows about 50 files, while the broken one has only two or three. I don't think I can just copy these files across, though, as they might add completions for commands that are not even installed.

    TL;DR: autocompletion is broken, how can I fix it? It seems like it's disabled somewhere — why is this?

    EDIT: OK, it was never installed...

        $ sudo apt-get install bash-completion
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        The following NEW packages will be installed
          bash-completion
        0 upgraded, 1 newly installed, 0 to remove and 3 not upgraded.
        Need to get 140kB of archives.
        After this operation, 1,061kB of additional disk space will be used.
        Get:1 http://archive.ubuntu.com/ubuntu/ maverick-updates/main bash-completion all 1:1.2-2ubuntu1.1 [140kB]
        Fetched 140kB in 0s (174kB/s)
        Selecting previously deselected package bash-completion.
        (Reading database ... 23808 files and directories currently installed.)
        Unpacking bash-completion (from .../bash-completion_1%3a1.2-2ubuntu1.1_all.deb) ...
        Processing triggers for man-db ...
        Setting up bash-completion (1:1.2-2ubuntu1.1) ...

    It's now kind of working, but still wonky: apt-get ins<tab> gives sudo apt-get insserv as the option, and apt-get install php5<tab> gives apt-get install php5/ rather than the php5-* options.
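    For anyone landing here with the same symptom, a minimal sketch of how the pieces fit together on a stock Ubuntu of that era (the shipped path is /etc/bash_completion; adapt if your setup differs): interactive shells load the completion rules from ~/.bashrc or /etc/bash.bashrc, roughly like this:

        # stanza Ubuntu ships in /etc/bash.bashrc (or ~/.bashrc); if it is
        # missing or commented out, the files in /etc/bash_completion.d/ are
        # never loaded, even with the bash-completion package installed
        if [ -f /etc/bash_completion ] && ! shopt -oq posix; then
            . /etc/bash_completion
        fi

    If completion still misbehaves after installing the package, checking for (and uncommenting) this stanza is the usual next step.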

    Read the article

  • Add Command prompt in VS 2008 Express Edition manually

    - by Kumar
    Hi all, to add the command prompt in VS 2008 Express Edition, I did the following steps: Tools > External Tools > click Add, then entered the following information:

        Title: Visual Studio 2008 Command Prompt
        Command: cmd.exe
        Arguments: %comspec% /k ""C:\Program Files\Microsoft Visual Studio 9.0\VC\vcvarsall.bat"" x86
        Initial Directory: $(ProjectDir)

    Then OK/Apply. After this, when I went to the Tools menu and clicked on Visual Studio 2008 Command Prompt, a command prompt opened but I got the following error message:

        '"C:\Program Files\Microsoft Visual Studio 9.0\VC\vcvarsall.bat"' is not recognized as an internal or external command, operable program or batch file.
        C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE

    Please, can somebody help me fix this problem, or teach me from scratch how to add a command prompt to the Tools menu manually in VS 2008 Express Edition? Thanks, Kumar
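    The nested cmd.exe/%comspec% invocation plus the doubled quotes look like the likely culprit: %comspec% already expands to cmd.exe, so the batch path ends up quoted one level too deep. A hedged variant that usually works for external tools (untested here) keeps a single level of quoting:

        Title: Visual Studio 2008 Command Prompt
        Command: %comspec%
        Arguments: /k "C:\Program Files\Microsoft Visual Studio 9.0\VC\vcvarsall.bat" x86
        Initial Directory: $(ProjectDir)

    The doubled-quote form is for desktop shortcuts, where the whole command line passes through an extra round of quote stripping; in the External Tools dialog the single-quoted form is enough.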

    Read the article

  • What happens when you uninstall a per-user installation?

    - by CraigJ
    What happens if an MSI installation is set to install as per-user, and 3 different users log on and each install the app? Will Windows Installer recognise that the same MSI has already been installed into Program Files and therefore it doesn't need to install it again? What happens if one of the 3 users then uninstalls the app while they are logged in? Will Windows Installer recognise that 2 other users still need the app to be installed and therefore leave alone the app folder in Program Files?

    Read the article

  • Lazarus: Can't find unit [unit] used by [program]

    - by Ree
    I'm trying to use an external library (wingraph) in a simple program. I have the .o and .ppu files. I added the directory that contains them to both the "Other Unit Files" and "Include Files" paths under Project > Compiler Options. When building, I still get the error "Can't find unit wingraph used by [program]". The library is Windows-specific and I'm compiling on Windows, too. What should I do to solve the problem? Note that I don't have extensive knowledge of Pascal itself or its tools — I'm just trying to quickly help someone start using the library.
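    For readers debugging the same thing from the command line, the equivalent of those IDE fields is roughly the following (the path is a placeholder):

        # -Fu adds a unit search path, -Fi an include search path
        fpc -Fu"C:\path\to\wingraph" -Fi"C:\path\to\wingraph" myprogram.pas

    One other common cause worth checking: .ppu files are only readable by the same FPC version that produced them, so precompiled units from an older/newer compiler will also fail to load even when the paths are right.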

    Read the article

  • How do I detach a local SVN working copy?

    - by Simon A. Eugster
    I cannot just rm -rf $(find . -name '.svn'), because I've got some directories in my working copy which are unversioned (on svn:ignore) and at the same time working copies of other SVN repositories:

        my-repo
        |+ directory
        ||- .svn (to delete)
        ||- files...
        |+ another_directory
        ||- .svn (to delete)
        ||- files...
        |+ directory_ignored (svn:ignore)
        ||- .svn (different working copy)
        ||- more files...

    So I'd like to just tell Subversion to remove all .svn directories belonging to this working copy only. Is this possible? The directory structure is quite complex, so doing it manually would really suck.
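    One hedged approach that sidesteps the nested-working-copy problem entirely: svn export copies only what this working copy versions, leaving every .svn directory (and the foreign working copies under svn:ignore'd dirs) behind:

        # produces a clean, detached copy next to the original
        svn export . ../my-repo-detached

    Alternatively, a find that prunes the foreign working copy before deleting (directory name taken from the layout above; dry-run with -print first):

        find . -name directory_ignored -prune -o -type d -name .svn -print0 | xargs -0 rm -rf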

    Read the article

  • How to configure hbm2java and hbm2dao to add packagename to generated classes

    - by mmm
    Hi, I'm trying to configure hbm2java with Maven to generate POJO classes and DAO objects. One of the issues I'm dealing with is that package names aren't generated. I'm using the following POM fragment for that:

        <execution>
          <id>hbm2java</id>
          <phase>generate-sources</phase>
          <goals>
            <goal>hbm2java</goal>
          </goals>
          <inherited>false</inherited>
          <configuration>
            <components>
              <component>
                <name>hbm2java</name>
                <implementation>configuration</implementation>
              </component>
            </components>
            <componentProperties>
              <packagename>package.name</packagename>
              <configurationfile>target/hibernate3/generated-mappings/hibernane.cfg.xml</configurationfile>
            </componentProperties>
          </configuration>
        </execution>

    Yet the generated code begins with the following:

        // default package
        // Generated 2010-05-17 13:11:51 by Hibernate Tools 3.2.2.GA
        /**
         * Messages generated by hbm2java
         */
        public class Messages implements java.io.Serializable {

    Is there a way to force Maven to generate the package part as defined in packagename?
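    A hedged guess, based on the "// default package" marker: if the mappings referenced by that cfg.xml were produced by an earlier hbm2hbmxml execution, the package has to be applied at that step as well, because hbm2java takes class names verbatim from the mapping files. Something along these lines (untested sketch):

        <execution>
          <id>hbm2hbmxml</id>
          <phase>generate-sources</phase>
          <goals>
            <goal>hbm2hbmxml</goal>
          </goals>
          <configuration>
            <componentProperties>
              <!-- qualify the generated class names so hbm2java sees a package -->
              <packagename>package.name</packagename>
            </componentProperties>
          </configuration>
        </execution>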

    Read the article

  • Why does Perl's readdir() cache directory entries?

    - by Frank Straetz
    For some reason Perl keeps caching the directory entries I'm trying to read using readdir:

        opendir(SNIPPETS, $dir_snippets); # or die...
        while ( my $snippet = readdir(SNIPPETS) ) {
            print ">>>".$snippet."\n";
        }
        closedir(SNIPPETS);

    Since my directory contains two files, test.pl and test.man, I'm expecting the following output:

        .
        ..
        test.pl
        test.man

    Unfortunately Perl returns a lot of files that have since vanished, for example because I tried to rename them. After I move test.pl to test.yeah, Perl will return the following list:

        .
        ..
        test.pl
        test.yeah
        test.man

    What's the reason for this strange behaviour? The documentation for opendir, readdir and closedir doesn't mention any sort of caching mechanism. "ls -l" clearly lists only two files.
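    For what it's worth, the usual explanation is not a Perl cache: POSIX leaves it unspecified whether entries added or removed after opendir() show up in subsequent readdir() calls, so a handle opened before the rename can legitimately return stale entries. A hedged sketch of the workaround — reopen the handle after renaming (and use the defined() idiom, since a file named "0" would end the plain while loop early):

        # re-open to get a fresh snapshot after the directory has changed
        opendir(my $dh, $dir_snippets) or die "can't open $dir_snippets: $!";
        while ( defined(my $snippet = readdir($dh)) ) {
            print ">>>$snippet\n";
        }
        closedir($dh);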

    Read the article

  • 503 (Server Unavailable) WebException when loading local XHTML file in Visual C# 2008

    - by kcoppock
    Hello! I'm currently working on an ePub reader application, and I've been reading a bunch of regular XML files just fine with System.Xml and XmlDocument:

        XmlDocument xmldoc = new XmlDocument();
        xmldoc.Load(Path.Combine(Directory.GetCurrentDirectory(), "META-INF/container.xml"));
        XmlNodeList xnl = xmldoc.GetElementsByTagName("rootfile");

    However, now I'm trying to open the XHTML files that contain the actual book text. I don't really know the difference between XML and XHTML, but I'm getting the following error with this code (in the same document, using the same XmlDocument and XmlNodeList variables):

        xmldoc.Load(Path.Combine(Directory.GetCurrentDirectory(), "OEBPS/part1.xhtml"));

        WebException was unhandled: The remote server returned an error: (503) Server Unavailable

    It's a local document, so I don't understand why it's giving this error. Any help would be greatly appreciated. :) I've got the full source code here if it helps: http://drop.io/epubtest (I know the ePubConstructor.ParseDocument() method is horribly messy — I'm just trying to get it working at the moment before I split it into classes.)
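    The classic cause of this exact symptom: an XHTML file carries a DOCTYPE pointing at the w3.org DTDs, and XmlDocument tries to download them during Load — w3.org throttles those requests, hence a 503 for a "local" file. A hedged sketch of the usual fix, which stops the parser from hitting the network:

        // prevent XmlDocument from fetching the XHTML DTD from w3.org
        XmlReaderSettings settings = new XmlReaderSettings();
        settings.ProhibitDtd = false;   // .NET 3.5 name; DtdProcessing in 4.0+
        settings.XmlResolver = null;    // never resolve external entities over the network
        using (XmlReader reader = XmlReader.Create(
                Path.Combine(Directory.GetCurrentDirectory(), "OEBPS/part1.xhtml"), settings))
        {
            xmldoc.Load(reader);
        }

    Caveat: with the resolver disabled, entities defined by the DTD (e.g. &nbsp;) no longer resolve and may need separate handling.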

    Read the article

  • script to recursively check for and select dependencies

    - by rp.sullivan
    I have written a script that does this, but it is one of my first scripts ever, so I am sure there is a better way :) Let me know how you would go about it — I'm looking for a simple yet efficient approach. Here is some important background info (it might be a little confusing, but hopefully by the end it will make sense):

    1) This image shows the structure/location of the relevant dirs and files.

    2) The packages file located at ./config/default/config/packages is a space-delimited file. Field 5 is the "package name", which I will call $a for explanation's sake. Field 4 is the name of the dir containing the $a dir; I will call it $b. Field 1 shows whether the package is selected: "X" (capital x) for selected and "O" (capital o, as in orange) for not selected. Here is an example of what the packages file might contain:

        ...
        X ---3------ 104.800 database gdbm 1.8.3 / base/library CROSS 0
        O -1---5---- 105.000 base libiconv 1.13.1 / base/tool CROSS 0
        X 01---5---- 105.000 base pkgconfig 0.25 / base/tool CROSS 0
        X -1-3------ 105.000 base texinfo 4.13a / base/tool CROSS DIETLIBC 0
        O -----5---- 105.000 develop duma 2_5_15 / base/development CROSS NOPARALLEL 0
        O -----5---- 105.000 develop electricfence 2_4_13 / base/development CROSS 0
        O -----5---- 105.000 develop gnupth 2.0.7 / extra/development CROSS NOPARALLEL FPIC-QUIRK 0
        ...

    3) For almost every package listed in the packages file there is a corresponding .cache file. The .cache file for package $a would be located at ./package/$b/$a/$a.cache. The .cache files contain a list of dependencies for that particular package: the dependencies are field 2 of lines containing "[DEP]", and they are all names of packages in the packages file. Here is an example of what one of the .cache files might look like:

        [TIMESTAMP] 1134178701 Sat Dec 10 02:38:21 2005
        [BUILDTIME] 295 (9)
        [SIZE] 11.64 MB, 191 files
        [DEP] 00-dirtree
        [DEP] bash
        [DEP] binutils
        [DEP] bzip2
        [DEP] cf
        [DEP] coreutils
        ...

    So with all that in mind, I'm looking for a shell script that, from within the main dir: looks at the ./config/default/config/packages file, finds the "selected" packages and reads the corresponding .cache files; then compiles a list of dependencies that excludes the already selected packages; then selects those dependencies (by changing field 1 to X) in the ./config/default/config/packages file, and repeats until all dependencies are met.

    Note: the script will ultimately end up in the scripts dir and be called from the main dir. If this is not clear, let me know what needs clarification. For those interested, I'm playing around with T2 SDE. If you are into playing around with Linux, it might be worth taking a look.
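    Since the question invites alternatives, here is a rough sketch of the fixed-point loop described above — untested against a real T2 tree, with field positions ($1 flag, $4 dir, $5 name) and paths taken straight from the description:

        #!/bin/sh
        # Repeatedly mark dependencies of selected packages as selected
        # until a pass makes no change (a fixed point).
        pkgfile=./config/default/config/packages

        while : ; do
            # dependencies of every currently selected (X) package
            deps=$(awk '$1 == "X" { print $4, $5 }' "$pkgfile" |
                while read -r b a; do
                    cache="./package/$b/$a/$a.cache"
                    [ -f "$cache" ] && awk '$1 == "[DEP]" { print $2 }' "$cache"
                done | sort -u)

            selected=0
            for dep in $deps; do
                # flip O -> X on the line whose package-name field is $dep
                if awk -v d="$dep" '$1 == "O" && $5 == d { f=1 } END { exit !f }' "$pkgfile"; then
                    awk -v d="$dep" '$1 == "O" && $5 == d { sub(/^O/, "X") } { print }' \
                        "$pkgfile" > "$pkgfile.tmp" && mv "$pkgfile.tmp" "$pkgfile"
                    selected=1
                fi
            done
            [ "$selected" -eq 1 ] || break
        done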

    Read the article

  • GhostScript font issues

    - by Robert
    I'm running GPL Ghostscript 8.70 (2009-07-31) on Windows XP. I have about 100 PDF files I've attempted to run through GS, but I'm having font-related issues on two separate groups of files from two different customers. I'm not sure whether the issues are related. Here are the two errors I receive:

        Loading Courier font from C:\Program Files\gs\fonts/cour.ttf... 2343384 986555 13583240 12261829 3 done.
        Using CourierNewPSMT font for Courier.
        Error: /rangecheck in --get--

        Can't find CID font "Arial".
        Substituting CID font /Adobe-Identity for /Arial, see doc/Use.htm#CIDFontSubstitution.
        The substitute CID font "Adobe-Identity" is not provided either. Will exit with error.
        Error: /undefined in findresource

    I've tried just about everything I can think of with Fontmap and cidfmap. Does anyone out there have a solution?
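    For the second error, the escape hatch documented in doc/Use.htm#CIDFontSubstitution is a cidfmap entry mapping the missing CID font onto a local TrueType file. A hedged sketch — the font path and the /CSI registry/ordering values here are assumptions to adapt from the worked examples in that document:

        % in lib/cidfmap — map the CID font "Arial" to a Windows TrueType file
        /Arial << /FileType /TrueType /Path (C:/Windows/Fonts/arial.ttf) /SubfontID 0 /CSI [(Identity) 0] >> ;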

    Read the article

  • Setting custom behaviour via .config file - why doesn't this work?

    - by Andrew Shepherd
    I am attempting to insert a custom behavior into my service client, following the example here. I appear to be following all of the steps, but I am getting a ConfigurationErrorsException. Can anyone more experienced than me spot what I'm doing wrong? Here is the entire app.config file:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.serviceModel>
            <behaviors>
              <endpointBehaviors>
                <behavior name="ClientLoggingEndpointBehaviour">
                  <myLoggerExtension />
                </behavior>
              </endpointBehaviors>
            </behaviors>
            <extensions>
              <behaviorExtensions>
                <add name="myLoggerExtension"
                     type="ChatClient.ClientLoggingEndpointBehaviourExtension, ChatClient, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null"/>
              </behaviorExtensions>
            </extensions>
            <bindings>
            </bindings>
            <client>
              <endpoint behaviorConfiguration="ClientLoggingEndpointBehaviour"
                        name="ChatRoomClientEndpoint"
                        address="http://localhost:8016/ChatRoom"
                        binding="wsDualHttpBinding"
                        contract="ChatRoomLib.IChatRoom" />
            </client>
          </system.serviceModel>
        </configuration>

    Here is the exception message:

        An error occurred creating the configuration section handler for system.serviceModel/behaviors: Extension element 'myLoggerExtension' cannot be added to this element. Verify that the extension is registered in the extension collection at system.serviceModel/extensions/behaviorExtensions. Parameter name: element (C:\Documents and Settings\Andrew Shepherd\My Documents\Visual Studio 2008\Projects\WcfPractice\ChatClient\bin\Debug\ChatClient.vshost.exe.config line 5)

    I know that I've correctly written the reference to the ClientLoggingEndpointBehaviourExtension object, because through the debugger I can see it being instantiated.
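    A guess worth checking, since the extension does instantiate: in .NET 3.5, WCF matches the behavior element against the registered extension by literal string comparison of the type attribute, so the assembly-qualified name must match exactly — version number, spacing after each comma, everything — what the CLR reports. Printing the authoritative string to paste into the config (a small sketch):

        // paste the output into the behaviorExtensions "type" attribute;
        // spacing/version mismatches cause exactly this exception in .NET 3.5
        Console.WriteLine(
            typeof(ChatClient.ClientLoggingEndpointBehaviourExtension).AssemblyQualifiedName);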

    Read the article

  • Logging Virus Definition Updates for MS Security Essentials in The Security Event Log

    - by Steve
    I would like to log a security event in Windows 7 whenever the Microsoft Security Essentials 2 virus definition files are updated, deleted, or changed. I was expecting to do this with an audit setting on one of the MS Security Essentials folders, but I wasn't sure which one, or how to avoid getting swamped with messages. Which folder or files should I audit to track definition updates (or corruption) in the security events, and is there a better approach?

    Read the article

  • how to stop homegroup sharing folders?

    - by srisar
    Hi, I have a homegroup on my PC and laptop, both running Windows 7. I can share folders and files easily, but the problem is I can't stop sharing a folder. I even went to Computer Management and stopped sharing from there, but inside the homegroup the "stopped" shares are still showing. I can't open them any more, because it says the network resource is unavailable, yet the folders are still listed. How do I hide them?

    Read the article

  • trigger config transformation in TFS 2010 or msbuild

    - by grenade
    I'm attempting to make use of configuration transformations in a continuous-integration environment. I need a way to tell the TFS build agent to perform the transformations. I was kind of hoping it would just work after discovering the config transform files (web.qa-release.config, web.production-release.config, etc...), but it doesn't. I have a TFS build definition that builds the right configurations (qa-release, production-release, etc...), and I have some specific .proj files that get built within these definitions and contain some environment-specific parameters, e.g.:

        <PropertyGroup Condition=" '$(Configuration)'=='production-release' ">
          <TargetHost Condition=" '$(TargetHost)'=='' ">qa.web</TargetHost>
          ...
        </PropertyGroup>
        <PropertyGroup Condition=" '$(Configuration)'=='qa-release' ">
          <TargetHost Condition=" '$(TargetHost)'=='' ">production.web</TargetHost>
          ...
        </PropertyGroup>

    I know from the output that the correct configurations are being built. Now I just need to learn how to trigger the config transformations. Is there some hocus pocus that I can add to the final .proj in the build to kick off the transform and blow away the individual transform files?
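    A hedged sketch of one way to kick it off from a .proj: the web-publishing targets that ship with VS2010 expose a TransformXml task that can be invoked directly. The assembly path below is the usual install location and the "MySite" output folder is a placeholder — both are assumptions to adapt:

        <UsingTask TaskName="TransformXml"
                   AssemblyFile="$(MSBuildExtensionsPath)\Microsoft\VisualStudio\v10.0\Web\Microsoft.Web.Publishing.Tasks.dll" />
        <Target Name="TransformConfigs" AfterTargets="Build">
          <TransformXml Source="web.config"
                        Transform="web.$(Configuration).config"
                        Destination="$(OutDir)_PublishedWebsites\MySite\web.config" />
          <!-- blow away the per-environment transform files afterwards -->
          <Delete Files="$(OutDir)_PublishedWebsites\MySite\web.*-release.config" />
        </Target>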

    Read the article

  • Web Deployment Projects for VS2010 on build server failing with Error MSB4086

    - by SteveBering
    When I upgraded my Web Deployment Project from VS2008 to the VS2010 beta version, I was able to execute the build locally on my development box. However, when I tried to execute the build on our TeamCity build server, I began getting the following exception:

        C:\Program Files\MSBuild\Microsoft\WebDeployment\v10.0\Microsoft.WebDeployment.targets(162, 37): error MSB4086: A numeric comparison was attempted on "$(_SourceWebProjectPath.Length)" that evaluates to "" instead of a number, in condition "'$(_SourceWebProjectPath)' != '' And $(_SourceWebProjectPath.Length) >= 4)".

    I did install the Web Deployment Project add-in on my build server, and I did copy the C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v10.0\WebApplications directory from my development box to the C:\Program Files\MSBuild\Microsoft\VisualStudio\v10.0\ directory on the build server. (Note: my dev box is 64-bit and the build server 32-bit.) I can't figure out why this behaves differently on the build server than on my dev machine. Anyone have any ideas? Thanks, Steve
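    For readers hitting the same MSB4086: property functions such as $(_SourceWebProjectPath.Length) only exist in MSBuild 4.0, so the "" result usually means the build agent is running the 3.5 engine against the v10.0 targets. A hedged check is to force the 4.0 engine on the server, e.g. from a command line (the framework folder name is the standard .NET 4 location; adjust for the installed beta):

        REM run the build with the .NET 4.0 MSBuild engine
        %WINDIR%\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe MySolution.sln /tv:4.0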

    Read the article

  • chkdsk, SeaTools, and "does not have enough space to replace bad clusters"

    - by Zian Choy
    When I tried to do a Windows Vista Complete PC Backup, I received an error message that blathered about bad sectors. Then, when I ran chkdsk /r on the destination drive, this is what I got:

        C:\Windows\system32>chkdsk /R E:
        The type of the file system is NTFS.
        Volume label is Desktop Backup.

        CHKDSK is verifying files (stage 1 of 5)...
          822016 file records processed.
        File verification completed.
          1 large file records processed.
          0 bad file records processed.
          0 EA records processed.
          0 reparse records processed.
        CHKDSK is verifying indexes (stage 2 of 5)...
          848938 index entries processed.
        Index verification completed.
          0 unindexed files processed.
        CHKDSK is verifying security descriptors (stage 3 of 5)...
          822016 security descriptors processed.
        Security descriptor verification completed.
          13461 data files processed.
        CHKDSK is verifying file data (stage 4 of 5)...
        The disk does not have enough space to replace bad clusters detected in file 239649 of name .
        The disk does not have enough space to replace bad clusters detected in file 239650 of name .
        The disk does not have enough space to replace bad clusters detected in file 239651 of name .
        An unspecified error occurred.f 822000 files processed)

    Yet, when I ran the SeaTools short and long generic tests on the Seagate disk, I didn't receive any errors. I know that I could reformat the disk and try running chkdsk /r again, but I'd prefer to avoid waiting 4 hours in the hope that the problem was magically fixed. On the other hand, if I RMA the drive to Seagate, I have no SeaTools error number to use, and they may claim that the drive is just fine. What should I try next?

    Side frustration: there is plenty of free hard drive space — the E: partition has 182 GB free.

    Read the article

  • How can I inject an object into an WCF IErrorHandler implementation with Castle Windsor?

    - by Michael Johnson
    I'm developing a set of services using WCF. The application does dependency injection with Castle Windsor. I've added an IErrorHandler implementation that is applied to services via an attribute. Everything is working so far: the IErrorHandler object (of a class called FaultHandler) is being applied properly and invoked.

    Now I'm adding logging. Castle Windsor is set up to inject the logger object (an instance of IOurLogger), and this works. But when I try to add it to FaultHandler, my logger is null. The code for FaultHandler looks something like this:

        class FaultHandler : IErrorHandler
        {
            public IOurLogger logger { get; set; }

            public bool HandleError(Exception error)
            {
                logger.Write("Exception type {0}. Message: {1}", error.GetType(), error.Message);

                // Let WCF handle things its way. We only want to log.
                return false;
            }

            public void ProvideFault(Exception error, MessageVersion version, Message fault)
            {
            }
        }

    This throws its own exception, since logger is null when HandleError() is called. The logger is being successfully injected into the service itself and is usable there, but for some reason I can't use it in FaultHandler.

    Update: here is the relevant part of the Windsor configuration file (edited to protect the innocent):

        <configuration>
          <components>
            <component id="Logger"
                       service="Our.Namespace.IOurLogger, Our.Namespace"
                       type="Our.Namespace.OurLogger, Our.Namespace" />
          </components>
        </configuration>
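    A hedged observation: if FaultHandler is constructed by the attribute (or by WCF itself) rather than resolved through Windsor, the container never gets a chance to fill the logger property — injection only happens for objects Windsor creates. One workaround sketch, assuming the app exposes its container through some static accessor (the Bootstrapper name below is hypothetical):

        class FaultHandler : IErrorHandler
        {
            private readonly IOurLogger logger;

            public FaultHandler()
            {
                // WCF/attribute code instantiates this type directly, bypassing
                // Windsor, so resolve the dependency explicitly instead
                logger = Bootstrapper.Container.Resolve<IOurLogger>();
            }

            // ... HandleError / ProvideFault as above
        }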

    Read the article

  • web.config, configSource, and "The 'xxx' element is not declared" warning.

    - by UpTheCreek
    I have broken the horribly unwieldy web.config file down into individual files for some of the sections (e.g. connectionStrings, authentication, pages) using the configSource attribute. This is working fine, but the individual XML files that hold the section 'snippets' cause warnings in VS. For example, a file named roleManager.config is used for the role manager section and looks like this:

        <roleManager enabled="false">
        </roleManager>

    However, I get a blue squiggle under the roleManager element in VS, with the following warning:

        The 'roleManager' element is not declared

    I guess this is something to do with valid XML and schemas. Is there an easy way to fix this — something I can add to the individual files? Thanks.

    P.S. I have heard that it is bad practice to break the web.config file out like this, but I don't really understand why — can anyone enlighten me?

    Read the article

  • Auditing events 4656 and 4658 on Windows folder on Server 2008

    - by PCurd
    During an overnight system state backup we are seeing thousands of success audit events (4656, 4658) on the folders c:\windows\servicing, system32 and others in the Windows folder. We use file success auditing on some files, so I can't disable it, but this deluge is filling up the logs and making reporting tricky. What is the harm of changing the auditing settings on the Windows folder? What are the recommended settings to put on the files for people doing system state backups? Thanks,

    Read the article

  • UniqueConstraint in EmbeddedConfiguration

    - by LantisGaius
    I just started using db4o on C#, and I'm having trouble setting a unique constraint on the DB. Here's the db4o configuration:

        static IObjectContainer db = Db4oEmbedded.OpenFile(dbase.Configuration(), "data.db4o");

        static IEmbeddedConfiguration Configuration()
        {
            IEmbeddedConfiguration dbConfig = Db4oEmbedded.NewConfiguration();

            // Initialize Replication
            dbConfig.File.GenerateUUIDs = ConfigScope.Globally;
            dbConfig.File.GenerateVersionNumbers = ConfigScope.Globally;

            // Initialize Indexes
            dbConfig.Common.ObjectClass(typeof(DAObs.Environment)).ObjectField("Key").Indexed(true);
            dbConfig.Common.Add(new Db4objects.Db4o.Constraints.UniqueFieldValueConstraint(typeof(DAObs.Environment), "Key"));

            return dbConfig;
        }

    and the object to serialize:

        class Environment
        {
            public string Key { get; set; }
            public string Value { get; set; }
        }

    Every time I get to committing some values, an "Object reference not set to an instance of an object." exception pops up, with a stack trace pointing to the UniqueFieldValueConstraint. When I comment out the two lines after the "Initialize Indexes" comment, everything runs fine (except that you can then save non-unique keys, which is a problem). Here is the commit code, in case I'm doing something wrong in this part too:

        public static void Create(string key, string value)
        {
            try
            {
                db.Store(new DAObs.Environment() { Key = key, Value = value });
                db.Commit();
            }
            catch (Db4objects.Db4o.Events.EventException ex)
            {
                System.Console.WriteLine(DateTime.Now + " :: Environment.Create\n"
                    + ex.InnerException.Message + "\n" + ex.InnerException.StackTrace);
                db.Rollback();
            }
        }

    Help please? Thanks in advance~
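    A hedged guess at the NullReferenceException: Key is an auto-implemented property, so the field db4o actually stores is the compiler-generated backing field (named something like <Key>k__BackingField), not "Key" — the index and constraint may therefore point at a field that doesn't exist. One workaround sketch is to store an explicit field and configure against its name:

        class Environment
        {
            private string _key;    // explicit backing field, so the stored name is stable

            public string Key
            {
                get { return _key; }
                set { _key = value; }
            }

            public string Value { get; set; }
        }

        // ...and configure the index/constraint against the field name:
        dbConfig.Common.ObjectClass(typeof(DAObs.Environment)).ObjectField("_key").Indexed(true);
        dbConfig.Common.Add(new Db4objects.Db4o.Constraints.UniqueFieldValueConstraint(
            typeof(DAObs.Environment), "_key"));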

    Read the article

  • Apache commons HTTPClient and log4j.xml

    - by java_pill
    I'm using Apache Commons HttpClient with Apache Axis 1.5, and I'm trying to log the messages exchanged when making web service calls by setting org.apache.commons.httpclient to DEBUG and httpclient.wire to DEBUG. However, this doesn't work. Below is my log4j.xml — can someone help me? Thanks.

        <?xml version="1.0" encoding="UTF-8" ?>
        <!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
        <log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

          <appender name="rolling" class="org.apache.log4j.DailyRollingFileAppender">
            <param name="File" value="test.log" />
            <layout class="org.apache.log4j.PatternLayout">
              <param name="ConversionPattern" value="%d [%t] %-5p %c:%L - %m%n"/>
            </layout>
          </appender>

          <logger name="org.apache.commons.httpclient">
            <level value="DEBUG"/>
          </logger>

          <logger name="httpclient.wire">
            <level value="DEBUG"/>
          </logger>

          <root>
            <level value="DEBUG" />
            <appender-ref ref="rolling"/>
          </root>

        </log4j:configuration>
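    One hedged thing to check: HttpClient logs through the commons-logging facade, so if commons-logging binds to java.util.logging (or Axis pulls in a different binding), the log4j.xml above is silently ignored even though it looks correct. Forcing the log4j binding via a commons-logging.properties file on the classpath looks like this:

        # commons-logging.properties — force commons-logging to use log4j,
        # so the httpclient.wire DEBUG settings above actually take effect
        org.apache.commons.logging.Log=org.apache.commons.logging.impl.Log4JLogger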

    Read the article

  • Copy task in Visual Studio 2008

    - by Maurizio Reginelli
    This question is related to a previous question I posted (see here). I need to configure a .csproj file to copy some files from one directory to another (let me call them SOURCE and DESTINATION). I also need to change the SOURCE path depending on the configuration I'm using: for example, if I compile my project in Debug mode, the SOURCE path must contain a subfolder called Debug. I tried the solution proposed by Schmitt in the previous post, using $(ConfigurationName) to set the dynamic directory in the SOURCE path. When I opened the solution containing that project, a list of links to the source files appeared in the main tree of the project, correctly related to Debug mode. But when I changed to Release mode, I saw that the paths of the linked source files were still set to the Debug version. Is there a way to specify a parameter in the Include attribute of the SourceFiles element? Thank you.
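    A hedged alternative that sidesteps the stale-link problem: declare the items inside a target rather than at project level, so they are evaluated each time the target runs instead of when the IDE loads the project (item name and paths below are placeholders; ItemGroup-inside-Target needs MSBuild 3.5, which is what VS2008 uses):

        <Target Name="AfterBuild">
          <ItemGroup>
            <!-- evaluated at build time, so $(Configuration) is always current -->
            <SnippetFiles Include="..\SOURCE\$(Configuration)\**\*.*" />
          </ItemGroup>
          <Copy SourceFiles="@(SnippetFiles)"
                DestinationFolder="DESTINATION\%(RecursiveDir)" />
        </Target>

    Items declared this way also never show up as phantom links in Solution Explorer, since the IDE does not expand target-scoped item groups.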

    Read the article
