Search Results

Search found 10285 results on 412 pages for 'enterprise repository'.


  • Removing SCIM input method as default from gnome terminal

    - by Mark
    Hello - I am recently back in the Linux world after about a 10-year absence. While I can find my way around most things, terminals and desktop managers are different from what I remember. One of the biggest problems I am encountering today is that when running a gnome terminal (this is SUSE 10.0 Enterprise), I'm getting behavior in the window that I don't want. Specifically, when I type, my typing is underlined as if something is trying to spell check my window. Further, it seems as if when running vi or less, my keystrokes are only processed by these apps when I hit 'return'. For example, if I'm running less and want to go back a page, I'll hit b, but nothing happens until I hit 'return'.

    I seem to have tracked this down to the 'input method'. Right clicking in the Gnome terminal allows me to set my input method to one of a dozen values. It seems that currently it's set to "SCIM Input Method". If I then select 'default' or 'X Input Method', apps (things like less, vi, and even the bash shell) behave as I would expect. Can someone tell me a) what is this SCIM input method and b) how can I make it so that it is not the default? I've poked around various configuration files in my home directory as well as in /etc, but I can't seem to find how this is set. I guess as a final question, can I just get rid of SCIM? Or is that tied into the window manager somehow? I do appreciate any clarifications that I can get. Thanks.
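
    For what it's worth, a minimal sketch of one common way to push SCIM out of the way per user, assuming a SUSE-style setup where the input method is chosen through environment variables (file locations and variable names may differ on your release):

        # ~/.profile (or ~/.i18n) -- assumes your login session sources one of these
        export GTK_IM_MODULE=xim        # keep GTK apps such as gnome-terminal off SCIM
        export QT_IM_MODULE=xim
        export XMODIFIERS="@im=none"    # tell X clients not to attach an external input method

        # System-wide on SUSE the default is often chosen here (assumed location):
        #   /etc/sysconfig/language  ->  INPUT_METHOD=""

    Logging out and back in would be needed for the change to take effect.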

    Read the article

  • Active Directory LDAP and user issues (using apache2 for svn access)

    - by CaCl
    I currently have a setup where I work that lets users use their active directory domain logins and passwords to authenticate and authorize access to Subversion. Currently I need to allow application accounts the same access. So our IT group creates application accounts in the active directory for us to use. But they want to be "secure" so they set the "Workstations Allowed" to be only a limited number of workstations. So when an application account hits the apache2 server for authentication they can't login for some reason and I'm having a heck of a time trying to debug. The error logs only show me:

        [Tue Apr 06 11:24:25 2010] [warn] [client 24.24.24.24] [3469] auth_ldap authenticate: user appuser13 authentication failed; URI /svn [ldap_simple_bind_s() to check user credentials failed][Invalid credentials]
        [Tue Apr 06 11:24:25 2010] [error] [client 24.24.24.24] user appuser13: authentication failure for "/svn": Password Mismatch

    I've checked the password numerous times and it appears to be correct but I can't seem to get the user to authenticate properly. Below is a snippet of the apache configuration for ldap:

        # Auth providers
        # Active Directory
        <AuthnProviderAlias ldap ldap1>
            AuthBasicProvider ldap
            AuthLDAPURL "ldap://dmain.company.com:389/dc=dmain,dc=company,dc=com?sAMAccountName?sub?(objectClass=*)"
            AuthLDAPBindDN "CN=svnuser13,OU=Application Accounts,dc=dmain,dc=teradata,dc=com"
            AuthLDAPBindPassword secret3
        </AuthnProviderAlias>

        # Another set of users from a different group
        <AuthnProviderAlias ldap ldap2>
            AuthBasicProvider ldap
            AuthLDAPURL ldap://diffldapserver:389/dc=specialusers,dc=com?uid
        </AuthnProviderAlias>

        # Another set of users from a different group
        <AuthnProviderAlias file file1>
            AuthUserFile /var/svn/auth/htpasswd
        </AuthnProviderAlias>

        <Location /svn>
            DAV svn
            SVNPath /var/svn
            Satisfy Any
            Require valid-user
            AuthType Basic
            AuthName "SVN Repository"
            AuthBasicProvider ldap1 file1 ldap2
            AuthzSVNAccessFile /var/svn/auth/access
            AuthzLDAPAuthoritative on
            Require valid-user
        </Location>

    Any help, like tips for debugging is appreciated!
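
    One way to take Apache out of the picture is to test whether the application account can bind at all with a direct ldapsearch against the domain controller (from the openldap-clients package). The hostname and base DN below come from the config above, but the account's own DN is a guess and may need adjusting:

        ldapsearch -x -H ldap://dmain.company.com:389 \
            -D "CN=appuser13,OU=Application Accounts,dc=dmain,dc=company,dc=com" \
            -W -b "dc=dmain,dc=company,dc=com" "(sAMAccountName=appuser13)"

    If this bind fails when run from the web server but succeeds from one of the allowed workstations, the "Workstations Allowed" restriction is the likely culprit rather than the Apache configuration.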

    Read the article

  • Using 1920x1200 mode on SyncMaster T260HD in Linux

    - by dagorym
    I just got a Samsung SyncMaster T260HD monitor. It works straight out of the box with Windows, but I can't seem to get it to work with Linux, which is my primary OS for day to day work. The computer boots up, but when going into graphical mode on Linux the monitor gives me a "Mode not supported" error and doesn't display anything. I booted up Windows and, using PowerStrip, grabbed the exact ModeLine that should be used to get the equivalent setting in Linux and added it to my xorg config file, but it doesn't seem to help. The ModeLine is:

        ModeLine "1920x1200" 153.9 1920 1984 2016 2080 1200 1203 1209 1235 +hsync -vsync

    This is the modeline for the working display settings in Windows, but it doesn't seem to work in Linux. My complete entry in the xorg.conf file for the monitor is:

        Section "Monitor"
            Identifier   "Monitor0"
            ModelName    "SyncMaster"
            DisplaySize  518 324
            HorizSync    30.0 - 81.0
            VertRefresh  56.0 - 75.0
            Option       "dpms"
            ModeLine     "1920x1200" 153.9 1920 1984 2016 2080 1200 1203 1209 1235 +hsync -vsync
        EndSection

    I'm running Scientific Linux 5.4 (a clone of Red Hat Enterprise Linux 5.4), but I've tried booting with a recent Linux Mint distro as well as Ubuntu 9.04 and had the same problem. Any suggestions on other things I should try or might be missing? If anyone's gotten this to work I'd love to know. Thanks.
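
    As a cross-check, here is a sketch of generating a standard CVT modeline and trying it at runtime before touching xorg.conf, assuming an X server and xrandr new enough to support --newmode (the output name "DVI-0" is only an example; use whatever name xrandr reports):

        cvt 1920 1200 60
        # cvt prints a modeline along these lines, which can then be tried live:
        xrandr --newmode "1920x1200_60.00" 193.25 1920 2056 2256 2592 1200 1203 1209 1245 -hsync +vsync
        xrandr --addmode DVI-0 1920x1200_60.00
        xrandr --output DVI-0 --mode 1920x1200_60.00

    Note that single-link DVI tops out around 165 MHz of pixel clock, so at 1920x1200 the reduced-blanking variant (cvt -r 1920 1200 60) is usually the one that actually works over DVI, which is also roughly what the PowerStrip modeline above looks like.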

    Read the article

  • Issues with duplicating TFS 2005 to a virtual server

    - by Hitchhiker
    We have a problem with our current TFS installation. For some reason, which I won't get into, the SharePoint DBs (sts_content, sts_config) were upgraded to WSS 3 (MOSS). So now none of our team-project sites work, we have no access to our documents and we can't create new team projects. We can still work with the version control, though. We wanted to "play" with the server and try to fix it without affecting the users, so we used P2V for VMware to duplicate the server to a virtual one. We then continued and changed all of the relevant configuration to point to the new server, as explained in the MSDN article. The step of rebuilding the warehouse (with setupwarehouse) failed. Also, we can't access the VersionControl web service (ourserver:8080/VersionControl/v1.0/repository.asmx). We are seeing these errors in the EventLog:

        TF53002: Unable to obtain registration data for application Build.
        TF53005: Unable to retrieve the Team Foundation Server installed UI culture.
        TF53002: Unable to obtain registration data for application VersionControl.
        TF30040: The database is not correctly configured. Contact your Team Foundation Server administrator.

    The solutions suggested in this blog post did not assist. So now we're kind of stuck. Any assistance will be appreciated.

    Read the article

  • kickstart ks.cfg: Where should 'url' point?

    - by Stefan Lasiewski
    I have a kickstart file (ks.cfg) on a floppy (old style). I am trying to install CentOS 5.4. The top of my ks.cfg says this:

        install
        # Install from local cdrom or over the network.
        #cdrom
        url --url http://kickstart.example.org/pub/centos/5.4/

    On the Apache server side, this command is failing with these 404s:

        kickstart.example.org 192.168.16.180 - - [01/Jun/2010:17:24:30 -0700] "GET /pub/centos/5.4///disc1/.discinfo HTTP/1.1" 404 314 "-" "urlgrabber/3.1.0"
        kickstart.example.org 192.168.16.180 - - [01/Jun/2010:17:24:43 -0700] "GET /pub/centos/5.4/repodata/repomd.xml HTTP/1.1" 404 316 "-" "urlgrabber/3.1.0 yum/3.2.22"

    It seems that the value of my url doesn't match the directory structure on the server. I swear this worked a few months ago. Someone else maintains the Yum repository, and they say nothing has changed. What should the value of the url be? Should it only include the OS (/pub/centos/5.4/), or should it include the architecture (/pub/centos/5.4/os/x86_64)? I see that Kickstart is trying to grab a file called 'repomd.xml', but why is it looking in '/pub/centos/5.4/repodata/repomd.xml' when these files actually exist at '/pub/centos/5.4/os/x86_64/repodata/repomd.xml' and other locations at '/pub/centos/5.4/*/$ARCH/repodata/repomd.xml'? I don't see this documented or explained well in the RedHat 5 Installation Guide.
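
    For comparison, a hedged guess at the corrected line: the kickstart url normally has to point at the directory that itself contains images/ and repodata/ for the target architecture, which in the layout described above would be the os/x86_64 tree:

        url --url http://kickstart.example.org/pub/centos/5.4/os/x86_64/

    A quick sanity check is that http://kickstart.example.org/pub/centos/5.4/os/x86_64/repodata/repomd.xml should be fetchable with curl or a browser before blaming anaconda.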

    Read the article

  • Network Explorer Intermittently Fails to Display all Computers in Work Group

    - by graf_ignotiev
    I run a small computer lab of 10 computers and occasionally, when using the network explorer (a.k.a. Network Browser), some or all of the remote computers will fail to appear. If I try to access a remote computer by its name I get an unspecified error (code 0x80004005), but I am still able to access it with the computer's IP address. The strangest part is that the problem will inexplicably go away after waiting awhile. Each computer is running Windows 7 x64 Enterprise and has identical hardware, software and configuration. They are all on the same subnet and in the same workgroup. I've spent days researching the problem and have tried the following solutions:

    1. Updated the BIOS, chipset and network adapter drivers
    2. Changed Power Settings in Network Adapter Properties so that the computer will not turn it off
    3. Disabled the Computer Browser service
    4. Changed the DHCP node type to broadcast
    5. Reviewed the Event Viewer logs

    Steps 3 and 4 have seemed to help the problem a little bit, but not completely. I'm beginning to suspect that the problem might lie with our router, which is a ZyXEL ZyWALL 2WG, as the packets sent by Network Discovery may not be returning in time, but I wanted to get some perspective on the issue before I went any further.

    Read the article

  • AuthBasicProvider: failover not working when the first LDAP is down?

    - by quanta
    I've been trying to set up redundant LDAP servers with Apache 2.2.3.

    /etc/httpd/conf.d/authn_alias.conf

        <AuthnProviderAlias ldap master>
            AuthLDAPURL ldap://192.168.5.148:389/dc=domain,dc=vn?cn
            AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
            AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

        <AuthnProviderAlias ldap slave>
            AuthLDAPURL ldap://192.168.5.199:389/dc=domain,dc=vn?cn
            AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
            AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

    /etc/httpd/conf.d/authz_ldap.conf

        #
        # mod_authz_ldap can be used to implement access control and
        # authenticate users against an LDAP database.
        #
        LoadModule authz_ldap_module modules/mod_authz_ldap.so

        <IfModule mod_authz_ldap.c>
            <Location />
                AuthBasicProvider master slave
                AuthzLDAPAuthoritative Off
                AuthType Basic
                AuthName "Authorization required"
                AuthzLDAPMemberKey member
                AuthUserFile /home/setup/svn/auth-conf
                AuthzLDAPSetGroupAuth user
                require valid-user
                AuthzLDAPLogLevel error
            </Location>
        </IfModule>

    If I understand correctly, mod_authz_ldap will try to search users in the second LDAP if the first server is down or OpenLDAP on it is not running. But in practice, it does not happen. Tested by stopping LDAP on the master, I get a "500 Internal Server Error" when accessing the Subversion repository. The error_log shows:

        [11061] auth_ldap authenticate: user quanta authentication failed; URI / [LDAP: ldap_simple_bind_s() failed][Can't contact LDAP server]

    Did I misunderstand?
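
    A different angle, in case the provider list itself is the limitation: mod_authnz_ldap's AuthLDAPURL accepts several space-separated hosts in one URL and tries them in order, so the redundancy can be pushed down into a single alias. A sketch, not tested on 2.2.3, so treat it as an assumption to verify against the docs for that release:

        <AuthnProviderAlias ldap masterslave>
            # both servers in one URL; the default port 389 is used for each host
            AuthLDAPURL "ldap://192.168.5.148 192.168.5.199/dc=domain,dc=vn?cn"
            AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
            AuthLDAPBindPassword pa$$w0rd
        </AuthnProviderAlias>

    The Location block would then use AuthBasicProvider masterslave instead of listing master and slave separately.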

    Read the article

  • Name resolution not working with ipv6 on centos

    - by jolivier
    I just installed CentOS 6.3 on a server to be installed in a data center, but cannot get name resolution / curl to work. I know this is because of it trying to use IPv6, since ping google.com works and curl -4 google.com works, but not curl google.com. I removed the IPv6 address from the interface and it does not change anything. This is very problematic since most system tools like yum currently fail at name resolution. Browsers like Firefox work because they might be using another tool for name resolution than the one used by curl.

    I managed to fix this on workstations by completely disabling IPv6 following tutorials like this one / hardcoding name resolution in /etc/hosts. But since I am here configuring a server which will later be installed in a remote data center, I would like not to mess up, understand what is going on and fix it properly. Besides, I will face the same issue with more servers to come, so I would really appreciate your help in understanding this problem and how to solve it. I would be happy to provide more information if needed to help understand what is going on.

    The current network configuration is a small enterprise network, with a DNS server (let's call it A) configured once a long time ago. dig google.com and dig -4 google.com are both refused by the A DNS. But this is also true for my workstation, on which curl is working (and yes, they both use the same A DNS server). Indeed this faulty server and my workstation have multiple nameservers in /etc/resolv.conf, and the second one is working fine for both of them, so if I remove A from my resolv.conf everything works fine! Regards, Olivier
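
    If it turns out that what trips things up is the glibc resolver sending its A and AAAA queries in parallel (a difference between how curl/getaddrinfo and ping/gethostbyname behave, and a plausible reason the misbehaving first nameserver hurts one but not the other), a hedged sketch of the usual workaround, which keeps the nameserver list intact:

        # /etc/resolv.conf
        options single-request        # serialize A/AAAA lookups (glibc >= 2.10)
        nameserver 192.0.2.53         # placeholder -- keep your existing nameserver lines

    Simply dropping or reordering the faulty A server, as already discovered, remains the cleaner fix if that server really is refusing queries.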

    Read the article

  • Nginx proxy with Redmine SVN authentication.

    - by Omegaice
    I am attempting to set up a system where I have an nginx server running as a reverse proxy for multiple websites that I want to run. To separate the websites I have created a Linux container for each site, to reduce conflicts in database usage etc. I am currently trying to get my main site working: I have nginx with Passenger set up and connecting to Redmine, and I have an Apache install specifically set up for serving the SVN over HTTP, attempting to use the Redmine authentication with that. I have set everything up as described in the Redmine howtos, but when I check a project out from the SVN it always works, even if the project is private, and whenever I try to commit to the repositories it fails saying "Could not open the requested SVN filesystem". The Apache error log entry related to that event is "(20014)Internal error: Can't open file '/srv/rcs/svn/error/format': No such file or directory". If I take out the Redmine authentication I can check repositories out and in fine, but then there is no authentication. Does anyone have any ideas?

    Edit: I tried to solve this problem another way by attempting to have the authentication work via LDAP. I managed to get it so that my user could log into the Redmine website, but as soon as I tried to check anything out it said that access was forbidden to the repository.
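
    The "Can't open file '.../format'" message usually means Apache is looking for a Subversion repository at a path that either doesn't exist or that its worker process can't read. A quick hedged sanity check from inside the container; the www-data user and the /srv/rcs/svn layout are assumptions, so adjust them to match the SVNParentPath/SVNPath actually configured:

        ls -l /srv/rcs/svn
        sudo -u www-data svnlook info /srv/rcs/svn/<repository>
        # if that fails with a permission error rather than "not a repository":
        sudo chown -R www-data:www-data /srv/rcs/svn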

    Read the article

  • New router messed up server 2003 setup...

    - by Aceth
    Hey, we were sent a new 2Wire router today and configured it as best we can to match the old BT Voyager. We've also got X static IPs. We've managed to get our web server public-facing on one of the new IPs. Then we use a hardware firewall, which is in a DMZ, again with a different static IP. This firewall is then the gateway for our internal LAN, with a few servers etc.

    The problem we're having is that only our PDC (primary domain controller, which has Exchange 2003 on it) can't ping externally, not even a plain external IP. We've connected laptops to the 2Wire router; they obtain a private IP (192.168.1.x) and work fine, can ping etc. Our other servers with internal IPs behind the firewall can ping out fine. We've connected to the firewall's logging console and the pings from the server are allowed through, so it's fine there.

    The server in question is Windows Server 2003 R2 Enterprise SP2 + Exchange 2003. It doesn't have the firewall turned on, it has a static private IP, the gateway is pointing to the right one, and the external static IP is routing fine inwards. We've run out of ideas... help??

    Read the article

  • Hard drive not correctly recognized on a new Windows 7 installation, but works correctly on Windows XP

    - by david
    I'm having problems configuring a hard disk in a brand new, clean Windows 7 installation. System specs:

        Hard disk: WD VelociRaptor WD6000HLHX (600 GB, 10000 RPM)
        Motherboard: Gigabyte Z77X-UD3H
        BIOS SATA mode set to AHCI (not RAID), with disk connected to SATA0 (6 Gb/s port)
        Windows 7 Enterprise SP1 64-bit

    The disk is recognized by the BIOS and is correctly identified, with the name and size correctly reported. Windows recognizes the disk itself and reports the device is functioning correctly, but it doesn't appear in Explorer. Disk Management shows the drive, but incorrectly states that it is uninitialized and has no partitions. If I try to initialize the drive, I get an error saying that "the system cannot find the file specified" (what file?). Before connecting the drive to the new machine, I partitioned and formatted it under Windows XP SP2, creating 2 partitions (MBR, not GPT) and copying over a boatload of data. However, none of this data appears under Windows 7. If I put the disk back into the Windows XP machine, I can access the disk and all of its data. Is it possible to get Windows 7 to correctly recognize the disk without having to erase it and start over? If so, how do I do so? I checked this question, which seems to cover the same issue, but it didn't help.

    Read the article

  • Dash (-) in directory listing

    - by Mazzy
    I've Googled around for this to no avail. I'm sure it's just something simple, but I have not been able to figure it out, perhaps because searching in Google or SF for a "-" can be problematic. I had a strange directory listing show up the other day in my git repository within Drupal. Listing my sites directory looks like this:

        -sh-4.1$ ls -al
        total 52
        drwxr-xr-x  5 (hide) (hide)  4096 Dec  6 16:15 .
        drwxr-xr-x 24 (hide) (hide)  4096 Dec 11 16:22 ..
        -rw-rw-r--  1 (hide) (hide) 24271 Dec  6 15:57 –
        drwxrwxr-x  4 (hide) (hide)  4096 Sep 17 11:53 all
        drwxr-xr-x  3 (hide) (hide)  4096 Sep 17 11:54 default
        drwxrwxr-x  8 (hide) (hide)  4096 Dec 11 17:40 .git
        -rw-rw-r--  1 (hide) (hide)   476 Sep 17 11:53 .gitignore
        -rw-rw-r--  1 (hide) (hide)    81 Sep 17 11:53 README.md

    This "-" file cannot be opened and does not appear to be a symlink, although when I execute "cd -" I get this:

        -sh-4.1$ cd -
        /home/sites/dev1.(hide).com

    That is, coincidentally or not, the user's home directory and the site's root directory. The other strange thing is that this entry does not show up for any other user browsing this same directory, nor does it show up at all for other users in their Git directories. The entry cannot be removed via rm. Running CentOS 6.2 by the way...
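
    The character in that listing is an en dash rather than a plain "-", which is part of why rm chokes on it. A sketch of removing it by inode so the odd name never has to be typed (the inode number below is a placeholder; take it from the ls output):

        ls -lib                                      # -i prints the inode, -b escapes odd characters
        find . -maxdepth 1 -inum 123456 -exec rm -i {} \;
        # alternatively, address it by path so it isn't parsed as an option:
        rm -i ./–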

    Read the article

  • Backup with Mercurial and Robocopy?

    - by Andrew Neely
    The Problem

    We would like to back up our critical files from several network shares to a removable hard drive. We want to automate the backup so we don't have to remember to run it. It needs to finish overnight. Furthermore, we want to be able to preserve multiple versions of each file so we can back out of our users' mistakes more easily.

    Background Information

    I work in a large Windows-based enterprise with a centralized IT section that is responsible for all backups. Their backups are geared towards disaster recovery and not user error, and require upper-level management approval for any non-disaster recoveries. Several times we have noticed that our backups failed and we weren't notified. I do not have administrative rights to the server or my desktop. We are trying to back up some 198,000 files spanning about 240 gigabytes. These files rarely change. Our backup drive is one terabyte.

    My Proposed Solution

    What I would like to do is to write a batch file using Robocopy with the /mir option along with Mercurial SCM to store all versions of the file. I would do an hg add followed by an hg commit before each execution of Robocopy to save the current state, and then make a mirrored copy of the file structures. The problem is that /mir will delete every folder not present in the source, and Mercurial stores the repository in a .hg folder in the destination folder. Does anybody know how I could either convince Mercurial to store the .hg folder elsewhere, or convince Robocopy not to delete it from the destination? I'm trying to avoid writing a custom program to do the copying.
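
    A rough sketch of the kind of batch file being described, under the assumption that excluding the .hg directory with /XD is acceptable: robocopy /MIR does not purge directories in the destination that are excluded with /XD, which sidesteps the conflict. Paths below are placeholders:

        @echo off
        rem Snapshot the previous mirror, then refresh it from the share.
        cd /d E:\backup
        hg add --quiet
        hg commit -m "nightly snapshot %date% %time%"
        robocopy \\server\share E:\backup /MIR /XD .hg /R:1 /W:5 /LOG:E:\robocopy.log

    Note that hg commit exits non-zero when nothing has changed since the last run, which is harmless in this context.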

    Read the article

  • Virtual (ESXi4) Win 2k8 R2 server hangs when adding role(s)

    - by Holocryptic
    I'm trying to provision a 2k8r2 Enterprise server in ESXi4. The OS installation goes fine: VMware Tools, adding to the domain, updates, all the basic stuff before you start adding Roles and Features. I've had this happen on two attempts already, and I'm not sure where the problem might be. I don't think it's hardware, because I have another 2k8r2 Standard server that's running fine. The only real difference is the install media: the server that's working was installed using a trial ISO and license, while the one I'm having problems with is a full MAK installation. When I go to add a Role (the last case was Application Server) it gets all the way to "collecting installation results" before it hangs. CPU utilization in the vSphere client shows little spikes of activity with flatlines in between, but the whole console is locked up. The only way to release it is to power off and bring it back up. When you go to look at the added roles after bringing it back up, it shows that the role is installed, but I don't trust that something didn't get wedged in all of that. The first install I did was with thin disk provisioning; the second attempt was with regular disk provisioning. In both cases 4 GB of RAM, 2 vCPUs. The VMware host is an HP ProLiant DL380 G6, RAID-1 OS, RAID-5 data volume, 12 GB RAM. Has anyone else had this problem, or know where I should start poking around?

    Read the article

  • Coldfusion 9 losing connection to MySQL 5 database server a couple of weeks after the server is started

    - by user1503757
    We get the following ColdFusion error message after our server has been running for a couple of weeks:

        Error Executing Database Query. Could not create connection to database server. Attempted reconnect 3 times

    We run ColdFusion Enterprise 9 on a one-year-old Xserve with Snow Leopard and MySQL 5. The server has about ten DSNs set up in the ColdFusion Administrator, all local, with default advanced settings, and host set to "localhost". The server is not under heavy load. The strange thing is that after a restart of the server, everything works fine. Then after a week or so, some databases will stop working, in the sense that ColdFusion cannot create a connection to them. If I then go to the ColdFusion Administrator and click "Verify all datasources", I will get that only 2 or 3 got verified and the other ones failed. While the server is behaving like this it is always the same datasources that can't be verified if I try to verify again, but not necessarily the same datasources that couldn't be verified the last time the server behaved like this. I know about the setting "max_connections"; we have included a line for that setting in the MySQL config file and set it to 2000, and when we read it by a query it says "2000", so that can't be the problem. Anyone?
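
    One way to narrow down whether MySQL itself is refusing the connections (as opposed to the ColdFusion connection pool going stale) is to watch the server-side counters while the failure is happening. These are standard MySQL statements, nothing ColdFusion-specific:

        SHOW VARIABLES LIKE 'max_connections';
        SHOW VARIABLES LIKE 'wait_timeout';
        SHOW STATUS LIKE 'Max_used_connections';
        SHOW STATUS LIKE 'Aborted_connects';
        SHOW PROCESSLIST;

    If Max_used_connections never approaches 2000 and Aborted_connects isn't climbing, the problem is more likely on the JDBC/ColdFusion side (for example pooled connections outliving wait_timeout) than in the max_connections setting.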

    Read the article

  • Can't Install msysgit/tortoisegit

    - by Jay
    I ran msysGit-netinstall-1.7.0.2-preview20100407-2.exe (http://code.google.com/p/msysgit/downloads/list). Then I ran TortoiseGit-1.4.4.0-64bit.msi (http://code.google.com/p/tortoisegit/downloads/list). msysgit was installed in C:\, and TortoiseGit appears to have been installed in C:\Program Files\TortoiseGit.

    I have "Git Clone...", "Git Create repository here" and "TortoiseGit" in the Explorer context menu. When I try to clone, I get "git have not installed" [sic]. I have tried setting the MSysGit path, in the TortoiseGit settings, to everything imaginable. Nothing works. Neither C:\Program Files nor C:\Program Files (x86) has a Git folder. The git command gives "command not found" from both cmd.exe and the bash shell that msysgit installed. I don't see msysgit in Control Panel - Programs - Program Features, but I do see TortoiseGit in there.

    I would like a procedure for verifying that msysgit is properly installed. A procedure for uninstalling msysgit would be an added bonus. I would like a procedure for getting TortoiseGit to work. I am running Windows 7 on a MacBook Pro.
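
    As a first verification step, a hedged sketch from cmd.exe; the C:\msysgit location is an assumption based on "installed in C:\" (the net installer usually defaults to a folder like that), so adjust if it landed elsewhere:

        dir C:\msysgit\bin\git.exe
        C:\msysgit\bin\git.exe --version

    If that prints a version, pointing TortoiseGit's Settings - General - MSysGit path at C:\msysgit\bin (the directory containing git.exe, not git.exe itself) is usually what makes the clone dialog happy; if git.exe isn't there at all, the netinstall likely never completed and re-running it (or switching to the full installer) would be the next step.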

    Read the article

  • Word 2013 can't compare readonly files

    - by Moshe Katz
    I am using Tortoise SVN to work with a repository that contains some documentation saved as Word documents. On my old computer, with Office 2010, I was able to compare with previous revisions: Tortoise would open Word in compare view so I could see the differences between the files. I have installed Office 2013 (the final version from Technet, not the preview version) on my new laptop for testing, and now I can no longer compare Word documents. Tortoise pops up a generic error that it was unable to compare the two files. Tortoise uses a JScript file to interface with Word, so I ran that file through a debugger and found that the actual error is:

        The Compare method or property is not available because this command is not available for reading.

    Some Googling followed by some testing revealed that the error is caused by the first file opened (in this case, the previous version) being opened as read-only. If I change the JScript code to open in normal mode, and I find the file on the system and un-check the "Read Only" property (if necessary), then the comparison opens as expected. I was unable to find any documentation about this change to Word on any Microsoft site. Does anyone know why this has been changed, and if it is intentional and not a bug, what the benefit is of requiring the file to be writable in order to compare it with another?

    Note: This is tagged word-2013-preview but it is actually for the release version of Word that is available on MSDN and Technet. I do not have enough rep. on this site to create new tags (yet).

    Read the article

  • Failover strategy for a 4 servers scenario

    - by Joao Villa-Lobos
    Hi all, I am trying to figure out how to set up replication and failover in a scenario with 4 servers (2 per location) where any server may assume the Master role. My initial scenario is the following one: 2 servers in location A (one Master, one Slave); 2 servers in location B (two Slaves). For this I'm thinking of using the Master-Master Active-Passive configuration suggested in O'Reilly's "High Performance MySQL" on all of them, so each one can become a Master when needed. If the Master "dies", the other server from location A assumes the Master role whenever possible; it will always have a higher priority than the servers in location B. A server in location B will only switch to Master if no server in location A is able to do so. Since MySQL can't handle this automatically I need some other way to implement it. I've already read about Heartbeat and Maatkit. Is this the way to go? Has anyone used this in a similar scenario? Is there some other way to achieve this? Any pointers about failover will be appreciated. I want to keep this as simple as possible, avoiding stuff such as DRBD. I'm not concerned about high availability, just a way to switch roles automatically without too much hassle. I'm using SuSE Enterprise 10 and MySQL 5.1.30-community. Thanks in advance, João
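
    For reference, a minimal sketch of the usual master-master active-passive my.cnf settings that the O'Reilly chapter builds on, shown for the active server in location A; its peer would swap the offset, and every server not currently acting as master would run read_only (values are illustrative, not tuned for this data set):

        [mysqld]
        server-id                = 1
        log-bin                  = mysql-bin
        log_slave_updates        = 1      # so the location B slaves can follow whichever master is active
        auto_increment_increment = 2
        auto_increment_offset    = 1
        read_only                = 0      # set to 1 on the passive co-master and on the location B slaves

    The automatic role switch itself still has to come from outside MySQL, which is where Heartbeat (or a script that flips read_only and repoints the slaves with CHANGE MASTER TO) comes in.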

    Read the article

  • Set up server at home for running svn and Bugzilla

    - by erikric
    Hi, I'm a fairly experienced developer, but when it comes to servers and network-related stuff, I'm pretty green. We are developing a web site, and I would like to set up a server that can host my Subversion repository and also host Bugzilla for when we release a test version to some users. So what do I need to accomplish this? I have an old computer that can be used. Can I run this on any OS? It currently has Windows 7 installed, but I was thinking about going for Ubuntu instead. Any reason to go for one or the other? I guess I need a web server, and I guess Apache will do fine. Do I need something else to make my computer available from all over the web, or is a web server and a standard internet connection all it takes? A link to some good online tutorials would be much appreciated. And then I mean for real dummies ;). I usually find pages that try to explain setting up servers going way over my head.
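
    On the Ubuntu side the pieces are all packaged, so a rough sketch of the install would look something like the following; package names shift between releases (libapache2-svn and bugzilla3 are the names I'm aware of on older releases), so verify them with apt-cache search first:

        sudo apt-get update
        sudo apt-get install apache2 subversion libapache2-svn bugzilla3
        sudo svnadmin create /var/svn/mysite        # example repository path

    Making the box reachable from the web is then mostly a matter of a port forward on the home router plus, ideally, a dynamic DNS name, since a standard home connection rarely comes with a fixed IP.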

    Read the article

  • Can't install PHP after apt-get dist-upgrade

    - by WASD42
    I had a server with a classical LAMP installation on Ubuntu 8.04 that had been running perfectly for months:

        Linux localhost 2.6.24-23-generic #1 SMP Wed Apr 1 21:47:28 UTC 2009 i686 GNU/Linux
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=8.04
        DISTRIB_CODENAME=hardy
        DISTRIB_DESCRIPTION="Ubuntu 8.04.4 LTS"

    Don't know why I started apt-get update and apt-get upgrade, but everything ended with apt-get dist-upgrade :) It all seemed to go alright... But now I can't start Apache or PHP, because PHP was simply deleted. When I try to install it:

        > apt-get install php5
        <...>
        The following packages have unmet dependencies:
          php5: Depends: libapache2-mod-php5 (>= 5.2.4-2ubuntu5.17) but it is not going to be installed or
                         php5-cgi (>= 5.2.4-2ubuntu5.17) but it is not going to be installed
        E: Broken packages

    When I try to install libapache2-mod-php5:

        The following packages have unmet dependencies:
          libapache2-mod-php5: Depends: php5-common (= 5.2.4-2ubuntu5.17) but 5.3.6-6~dotdeb.1 is to be installed
        E: Broken packages

    I don't know what 5.3.6-6~dotdeb.1 is and where this package comes from, because I've already removed the dotdeb repository from the APT sources :/ Tried apt-get update, apt-get upgrade, apt-get install php5 php5-common php5-cli with no success... Don't know what to try next :(
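
    A hedged suggestion for untangling this: apt-cache policy will show where the leftover dotdeb build of php5-common is coming from (possibly an already-installed package rather than a repository), and the Ubuntu build can be forced back explicitly using the version string from the error message:

        apt-cache policy php5-common
        apt-get install php5-common=5.2.4-2ubuntu5.17 php5 libapache2-mod-php5

    If php5-common is currently installed at the dotdeb version, downgrading it this way (or removing it first with apt-get remove php5-common) should let the stock hardy packages install again.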

    Read the article

  • Hyper-V 2008 R2 synthetic networking stops working with linux 2.6.32.15

    - by luxifer
    Hi there, so I thought I'd give Hyper-V on Windows Server 2008 R2 Enterprise a try on my home server (yes, it's legit... got it from MSDNAA). The first thing to throw at it was my firewall, which runs IPFire. This distribution currently uses kernel version 2.6.32.15 and comes with the Hyper-V drivers. So I enabled them, and at first they work just fine, but after a few minutes they simply fail: no packets go in or out anymore until I reboot the VM, and sometimes even that won't work, so the VM just keeps "Stopping" forever. Emulated networking works fine but it is slow and uses more CPU. That way my firewall routes more slowly than when running under VirtualBox on an Atom N270. My server has an E6750; the VM is limited to 25%, but that should still outperform this Atom CPU, especially since it's never going anywhere near 100% CPU load, so give me a break! A quick Google search led me to people having the same problem (even with other distributions and kernel versions that include those drivers) but no solution yet... I already found this, but I can't quite follow the author on the part where he solved the issue - especially since I need two virtual NICs for my firewall distro to work (obviously one internal and one external). What am I missing here?

    Read the article

  • Why I am getting "Problem loading the page" after enabling HTTPS for Apache on Windows 7?

    - by Anish
    I enabled HTTPS on the Apache server (2.2.15) on Windows 7 Enterprise by uncommenting

        Include /private/etc/apache2/extra/httpd-ssl.conf

    in C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\httpd.conf and modifying C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\httpd-ssl.conf to include:

        DocumentRoot "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/htdocs"
        ServerName myserver.com:443
        ServerAdmin [email protected]
        ...
        SSLCertificateFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/cert.pem"
        SSLCertificateKeyFile "C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/key.pem"

    Then I restart Apache (going to Start - All Programs - Apache Server 2.2 - Control - Restart) and go to localhost on port 443 in Firefox, where I get:

        Index of /
        Links/
        .....

    But on display of the web page I see:

        Unable to connect
        Firefox can't establish a connection to the server at localhost.
        * The site could be temporarily unavailable or too busy. Try again in a few moments.
        * If you are unable to load any pages, check your computer's network connection.
        * If your computer or network is protected by a firewall or proxy, make sure that Firefox is permitted to access the Web.

    I read "Why am I getting 403 Forbidden after enabling HTTPS for Apache on Mac OS X?" and added a default web server configuration block to match my DocumentRoot. The error log C:\Program Files (x86)\Apache Software Foundation\Apache2.2\logs\error.log gives the following error:

        The Apache2.2 service is running.
        (OS 5)Access is denied.  : Init: Can't open server certificate file C:/Program Files (x86)/Apache Software Foundation/Apache2.2/conf/cert.pem

    I checked the permissions for cert.pem and they indicate that all the permissions (Full control, Read, Read and modify, Execute, Write) are marked for Admin, and I am currently logged in as Admin. I tried using oldcert.pem and oldkey.pem on the same server and it works fine. Is there anything that I missed?
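
    Since "(OS 5)Access is denied" happens in the service's context rather than in the logged-in Admin session, one hedged check is to see which account the Apache service actually runs under and make sure that account can read the new cert and key; the service name Apache2.2 and the SYSTEM account below are assumptions, so substitute whatever the first command reports:

        sc qc Apache2.2
        icacls "C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\cert.pem"
        icacls "C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\cert.pem" /grant "NT AUTHORITY\SYSTEM":R
        icacls "C:\Program Files (x86)\Apache Software Foundation\Apache2.2\conf\key.pem"  /grant "NT AUTHORITY\SYSTEM":R

    Comparing the icacls output for cert.pem against the working oldcert.pem would also show whether the new files simply inherited a different ACL (or were encrypted/blocked) when they were copied in.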

    Read the article

  • Interrupted system call during "hg convert"

    - by Aaron Digulla
    When I run "hg convert" to convert a Subversion repository to Mercurial, I get this error: fetching revision log for "/trunk" from 1538 to 0 run hg sink post-conversion action Traceback (most recent call last): File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 46, in _runcatch return _dispatch(ui, args) File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 454, in _dispatch return runcommand(lui, repo, cmd, fullargs, ui, options, d) File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 324, in runcommand ret = _runcommand(ui, options, cmd, d) File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 505, in _runcommand return checkargs() File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 459, in checkargs return cmdfunc() File "/usr/lib/pymodules/python2.6/mercurial/dispatch.py", line 453, in <lambda> d = lambda: util.checksignature(func)(ui, *args, **cmdoptions) File "/usr/lib/pymodules/python2.6/mercurial/util.py", line 386, in check return func(*args, **kwargs) File "/usr/lib/pymodules/python2.6/hgext/convert/__init__.py", line 229, in convert return convcmd.convert(ui, src, dest, revmapfile, **opts) File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 398, in convert c.convert(sortmode) File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 312, in convert parents = self.walktree(heads) File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 109, in walktree commit = self.cachecommit(n) File "/usr/lib/pymodules/python2.6/hgext/convert/convcmd.py", line 267, in cachecommit commit = self.source.getcommit(rev) File "/usr/lib/pymodules/python2.6/hgext/convert/subversion.py", line 433, in getcommit self._fetch_revisions(revnum, stop) File "/usr/lib/pymodules/python2.6/hgext/convert/subversion.py", line 814, in _fetch_revisions for entry in stream: File "/usr/lib/pymodules/python2.6/hgext/convert/subversion.py", line 122, in __iter__ entry = pickle.load(self._stdout) IOError: [Errno 4] Interrupted system call abort: Interrupted system call Apparently, it is possible to restart a read on EINTR but how would I do that with pickle.load()? Also I wonder where that signal comes from? I suspect it's SIGCHILD but shouldn't popen() handle that?

    Read the article

  • 403 Forbidden When Using AuthzSVNAccessFile

    - by David Osborn
    I've had a nicely functioning svn server running on Windows that uses Apache for access. In the original setup every user had access to all repositories, but I recently needed the ability to grant a user access to only one repository. I uncommented the AuthzSVNAccessFile line in my httpd.conf file, pointed it to an access file and set up the access file, but I get a 403 Forbidden when I go to mydomain.com/svn. If I comment this line out again then things work. I also made sure I uncommented the LoadModule authz_svn_module line and verified that it was pointing to the correct file. Below are the Location section of my httpd.conf and my svnaccessfile.

    httpd.conf (Location section only):

        <Location /svn>
            DAV svn
            SVNParentPath C:\svn
            SVNListParentPath on
            AuthType Basic
            AuthName "Subversion repositories"
            AuthUserFile passwd
            Require valid-user
            AuthzSVNAccessFile svnaccessfile
        </Location>

    (I want a more complex policy in the long run but just did this to test the file out.)

    svnaccessfile:

        [svn:/]
        * = rw

    I have also tried just the below for the svnaccessfile:

        [/]
        * = rw

    I also restart the service after each change just to make sure it is taken.
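
    For comparison, a hedged example of an access file matching an SVNParentPath layout: sections are keyed by the repository's directory name under C:\svn (not by the /svn URL prefix), so [svn:/] most likely matches nothing, and with an authz file in force that alone yields a blanket 403. Repository and user names below are placeholders:

        [/]
        * = r

        [someproject:/]
        * =
        alice = rw

    It may also be worth giving AuthzSVNAccessFile an absolute path (e.g. C:\svn\svnaccessfile); a relative path is resolved against ServerRoot, which is easy to get wrong on Windows.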

    Read the article

  • Setting SVN permissions with Dav SVN Authz

    - by Ken
    There seems to be a path inheritance issue which is boggling me over access restrictions. For instance, if I grant rw access to one group/user and wish to restrict some /../../secret to none, it promptly spits in my face. Here is an example of what I'm trying to achieve in dav_svn.authz:

        [groups]
        grp_W = a, b, c, g
        grp_X = a, d, f, e
        grp_Y = a, e

        [/]
        * =
        @grp_Y = rw

        [somerepo1:/projectPot]
        @grp_W = rw

        [somerepo2:/projectKettle]
        @grp_X = rw

    What is expected: grp_Y has rw access to all repositories, while grp_W and grp_X only have access to their respective repositories.

    What occurs: grp_Y has access to all repositories, while grp_W and grp_X have access to nothing.

    If I flip the access ordering, where I give everyone access and restrict it in each repository, it promptly ignores the invalidation rule (stripping of rights) and gives everyone the access granted at the root level. Forgoing groups, it performs the same with user-specific provisions, even fully defined such as:

        [/]
        a = rw
        b =
        c =
        d =
        e =
        f =
        g = rw

        [somerepo1:/projectPot]
        a = rw
        b = rw
        c = rw
        d =
        e = rw
        f =
        g = rw

        [somerepo2:/projectKettle]
        a = rw
        b
        c
        d = rw
        e = rw
        f = rw
        g

    Which yields the exact same result. According to the documentation I'm following all protocols, so this is insane. Running on Apache2 with dav_svn.
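
    One note on the semantics that may explain part of the surprise: for a given path, mod_authz_svn applies only the single most specific matching section for users that section actually mentions (directly, through a group, or through *); anyone it doesn't mention falls back to the parent path's rules, and rules are never merged. So to make a subtree dark for everyone except one group, the section has to say so explicitly. A sketch, with a placeholder path:

        [somerepo1:/projectPot/secret]
        * =
        @grp_W = rw

    Here the "* =" line strips whatever was inherited from [/] (including grp_Y's rw), and only grp_W is re-granted inside that subtree.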

    Read the article
