Search Results

Search found 68825 results on 2753 pages for 'problem'.


  • Samba PDC share slow with LDAP backend

    - by hmart
    The scenario: I have a SUSE SLES 11.1 SP1 machine acting as Samba master PDC with an LDAP backend. One share holds the database files for a client-server application. I log XP and Windows 7 machines on to the local domain (example.local); the login is a little slow, but works. The client computers have an executable which opens, reads and writes the database files on the server share. The problem: when running Samba with the LDAP password backend, the client application runs VERY slowly, with a maximum transfer rate of about 2.5 Mbit per second. If I disable LDAP, the client app speed increases 20x, to a transfer rate of 50 Mbit/sec, and runs smoothly. I'm testing with just two users and two machines, so concurrency or LDAP size shouldn't be the problem here. The suspect: LDAP, and the smb.conf [global] section configuration. The question: what can I do? I've googled a lot, but still have no answer. The slow smb.conf WITH LDAP:

        [global]
        workgroup = zmartsoft.local
        passdb backend = ldapsam:ldap://127.0.0.1
        printing = cups
        printcap name = cups
        printcap cache time = 750
        cups options = raw
        map to guest = Bad User
        logon path = \\%L\profiles\.msprofile
        logon home = \\%L\%U\.9xprofile
        logon drive = P:
        usershare allow guests = Yes
        add machine script = /usr/sbin/useradd -c Machine -d /var/lib/nobody -s /bin/false %m$
        domain logons = Yes
        domain master = Yes
        local master = Yes
        netbios name = server
        os level = 65
        preferred master = Yes
        security = user
        wins support = Yes
        idmap backend = ldap:ldap://127.0.0.1
        ldap admin dn = cn=Administrator,dc=zmartsoft,dc=local
        ldap group suffix = ou=Groups
        ldap idmap suffix = ou=Idmap
        ldap machine suffix = ou=Machines
        ldap passwd sync = Yes
        ldap ssl = Off
        ldap suffix = dc=zmartsoft,dc=local
        ldap user suffix = ou=Users
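
    One way to tell whether the LDAP lookups or the file I/O are to blame is to time both directly on the server; a minimal sketch (the base DN comes from the smb.conf above, while the bind password, share and file names are placeholders):

        # time a search against the local directory; slow answers here implicate LDAP itself
        time ldapsearch -x -H ldap://127.0.0.1 -D "cn=Administrator,dc=zmartsoft,dc=local" -W \
            -b "dc=zmartsoft,dc=local" "(objectClass=*)" dn

        # time a raw copy from the share, bypassing the client application
        time smbclient //server/dbshare -U testuser%secret -c 'get bigfile /tmp/bigfile'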

    Read the article

  • Command line solution for removing parts from a binary file?

    - by zsero
    I have a binary file and I would like to remove parts from it. By removing I mean deleting those parts, thus making the file's size smaller. The parts to remove lie between two ASCII strings. So, for example, the file would look like this:

        ........ start ABCD end ..... start EFGH end ..... start IJKL end ...........

    In this file, I would like to search for the strings "start" and "end" and remove the parts between them. The way I think I could do it is to: look up all the locations of "start" and "end", calculate ranges from that, and delete those parts. Right now I am using a GUI-based hex editor and its "Search All", "Select Range" and "Delete" commands, but I am sure this could be solved with a powerful command-line hex/text editor. Do you know any solution to this problem which doesn't require a GUI for the look-up, clipboard copy & paste, range selection and deletion, but is just a few lines of command line? I am interested in both Linux shell scripts and command-line hex editors under Windows; even Python scripts are welcome. Do you think it is possible to solve this problem with just a simple regex replace? Are there any regex-replace utilities which handle binary files well?
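
    A regex replace can work, provided the tool treats the file as a single binary blob rather than as text lines; a sketch using Perl's slurp mode (assuming the markers really are the literal ASCII strings "start" and "end"):

        # -0777 slurps the whole file as one string; /s lets . match newlines, /g removes every occurrence
        perl -0777 -pe 's/start.*?end//gs' input.bin > output.bin

        # variant that keeps the "start" and "end" markers themselves in place
        perl -0777 -pe 's/(start).*?(end)/$1$2/gs' input.bin > output.bin

    The non-greedy .*? matches from each "start" to the nearest following "end", so overlapping or unbalanced markers would need a different approach.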

    Read the article

  • Chrome's new windows disappear from the desktop

    - by faulty
    I'm having this problem with Chrome on a Windows 7 x86 machine. I'm using Chrome for a locally hosted intranet web app. The web app makes frequent use of popup windows, which work like dialog boxes for the app. The problem started yesterday: the new windows no longer appear on top and are nowhere to be found on the desktop, but they do appear on the taskbar. Clicking the taskbar button doesn't bring up the window either. I've also tried Alt+Space, then Move; it doesn't work. I've only found two ways to bring a window up: Maximize, or Show as tab. What could have caused this and how should I fix it? Thanks. EDIT: I've configured the taskbar not to group buttons. EDIT: I've just gone through the JavaScript behind the button; it calls window.open to display the popup. I can see the popup flicker and disappear. If I step through the code, the popup displays properly, sometimes. Also, if I click the button twice, it does display the popup, but at a much smaller size, less than 100x100.

    Read the article

  • Mac: window manager frozen, have ssh access

    - by Bernd
    I have a Mac which regularly runs into a problem: the user interface stops responding, showing a "frozen" screen. The mouse still moves, but clicking does not trigger anything. This happens about once a week. The solution so far is to force switch-off the Mac and reboot it. I have ssh root access to the Mac. Killing (kill -9) the active application has no visible impact on what is shown on the screen. Any ideas on how to diagnose this? Is there a way to restart the window manager from the ssh shell? Killing /System/Library/Frameworks/ApplicationServices.framework/Frameworks/CoreGraphics.framework/Resources/WindowServer seems not to be possible. The Mac is an early 2008 iMac and runs Lion with the latest updates. /Library/Logs/DiagnosticReports is empty. Update: the problem persists after the update to Mountain Lion. The WindowServer process is in "uninterruptible wait" state (the "U" flag in the ps output is set):

        imac:~ root# ps ax|awk "NR==1|| /WindowServer/"|grep -v awk
          PID  TT  STAT      TIME COMMAND
           86  ??  Us    50:51.69 /System/Library/Frameworks/ApplicationServices.framework/Frameworks/CoreGraphics.framework/Resources/WindowServer -daemon

    Any idea how to diagnose what blocks the process? Any idea how to "wake up" the process?
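
    Uninterruptible wait usually means the process is stuck in the kernel waiting on I/O, so checking for disk trouble over the ssh session is a cheap first step; a sketch with stock OS X tools (whether the disk is actually the cause here is only a guess):

        # look for I/O or driver errors in the kernel log
        dmesg | tail -n 50

        # verify the boot volume while the system is running
        diskutil verifyVolume /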

    Read the article

  • When I try to access a website without www, I get access denied

    - by madphp
    I have an Apache web server on a Debian machine. I'm using Virtualmin to administer virtual hosts. I have two sites on this server right now; when I try to access one site without the www in the URL, I get an access denied error. The other site is fine. The site with the problem is a CakePHP app and has the following .htaccess file in the public_html folder:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule ^$ app/webroot/ [L]
            RewriteRule (.*) app/webroot/$1 [L]
        </IfModule>

    Below are the directives for the problem domain:

        SuexecUserGroup "#1001" "#1001"
        ServerName mydomain.net
        ServerAlias www.mydomain.net
        ServerAlias webmail.mydomain.net
        ServerAlias admin.mydomain.net
        DocumentRoot /home/mydomain/public_html
        ErrorLog /var/log/virtualmin/mydomain.net_error_log
        CustomLog /var/log/virtualmin/mydomain.net_access_log combined
        ScriptAlias /cgi-bin/ /home/mydomain/cgi-bin/
        ScriptAlias /awstats/ /home/mydomain/cgi-bin/
        DirectoryIndex index.html index.htm index.php index.php4 index.php5
        <Directory /home/mydomain/public_html>
            Options -Indexes +IncludesNOEXEC +FollowSymLinks +ExecCGI
            allow from all
            AllowOverride All
            AddHandler fcgid-script .php
            AddHandler fcgid-script .php5
            FCGIWrapper /home/mydomain/fcgi-bin/php5.fcgi .php
            FCGIWrapper /home/mydomain/fcgi-bin/php5.fcgi .php5
        </Directory>
        <Directory /home/mydomain/cgi-bin>
            allow from all
        </Directory>
        RewriteEngine on
        RewriteCond %{HTTP_HOST} =webmail.mydomain.net
        RewriteRule ^(.*) https://mydomain.net:20000/ [R]
        RewriteCond %{HTTP_HOST} =admin.mydomain.net
        RewriteRule ^(.*) https://mydomain.net:10000/ [R]
        RemoveHandler .php
        RemoveHandler .php5
        IPCCommTimeout 31
        <Files awstats.pl>
            AuthName "mydomain.net statistics"
            AuthType Basic
            AuthUserFile /home/mydomain/.awstats-htpasswd
            require valid-user
        </Files>
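
    Two quick checks can narrow this down: confirm which virtual host Apache actually selects for the bare domain, and compare the responses for the two Host headers against the same server; a sketch (run on the server itself):

        # show how Apache maps names to virtual hosts; the bare domain may be caught by a different vhost
        apache2ctl -S

        # compare the responses for the two hostnames
        curl -sI -H 'Host: mydomain.net'     http://127.0.0.1/ | head -n 5
        curl -sI -H 'Host: www.mydomain.net' http://127.0.0.1/ | head -n 5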

    Read the article

  • MySQL database slow on one server, fast on another, with similar configurations

    - by Alon_A
    We have a web application that runs on two GoDaddy servers. We experience slow performance on our production server, although it has stronger hardware than the testing one, and it is dedicated. I'll start with the configurations.

    Testing:
        CentOS Linux 5.8, Linux 2.6.18-028stab101.1 on i686
        Intel(R) Xeon(R) CPU L5609 @ 1.87GHz, 8 cores
        60 GB total, 6.03 GB used
        Apache/2.2.3 (CentOS)
        MySQL 5.5.21-log
        PHP Version 5.3.15

    Production:
        CentOS Linux 6.2, Linux 2.6.18-028stab101.1 on x86_64
        Intel(R) Xeon(R) CPU L5410 @ 2.33GHz, 8 cores
        120 GB total, 2.12 GB used
        Apache/2.2.15 (CentOS)
        MySQL 5.5.27-log - MySQL Community Server (GPL) by Remi
        PHP Version 5.3.15

    We are running the same code on both servers. The problem: we have a function that executes ~30000 PDO exec commands. On our testing server it takes about 1.5-2 minutes to complete; on our production server it can take more than 15 minutes, as the qcachegrind profile shows. Researching the problem, we checked the live graphs in phpMyAdmin and discovered that the MySQL server on our testing server was performing at a steady level of 1000 executed statements per 2 seconds, while the slow production MySQL server managed only 250 statements per 2 seconds, and not steadily at all, jumping between 0 and 250 every second; the respective graphs show this clearly. Comparing the two MySQL configurations side by side (the fast testing server on the left, the slow production server on the right) highlights the differences, but I can't find anything that could cause such a difference in behaviour, as the configs are mostly the same; maybe you can see something I can't. Note that our tables are all InnoDB, so the MyISAM differences are (probably) not relevant. Maybe it is the MySQL Community Server (GPL) build by Remi installed on the production server that causes the slow performance? Or maybe it needs to be configured differently for 64-bit? I'm currently out of ideas...
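
    With ~30000 individual statements, per-statement commit cost usually dominates, so the durability and binlog settings are the first things worth diffing between the two servers; a sketch (run on each server and compare the output):

        # innodb_flush_log_at_trx_commit=1 plus slower disks on production would
        # explain a large gap for many small writes
        mysql -u root -p -e "SHOW VARIABLES WHERE Variable_name IN
          ('innodb_flush_log_at_trx_commit','sync_binlog','log_bin','innodb_buffer_pool_size');"

    Wrapping the 30000 PDO execs in a single transaction is also a cheap experiment: if production speeds up dramatically, per-commit disk flushes are the culprit.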

    Read the article

  • APC Smart UPS network shutdown issue

    - by Rob Clarke
    Here is a bit about our setup: we have 2x Smart-UPS RT 6000 XL units with network management cards; we run PowerChute from a network server; PowerChute is connected to the management cards of both UPSs; the UPSs are set to do a graceful shutdown via PowerChute when the remaining battery duration is under 20 minutes; and we also have a command file that runs with PowerChute. Although our setup is redundant, we do not have an equal load on each UPS, due to APC switches for single-power devices. Because the loads differ, the batteries drain at different rates, and the UPSs reach the specified low-battery duration at completely different times. The problem here is that UPS 1 may have run down to 5 minutes and be in desperate need of initiating a PowerChute shutdown, while UPS 2 still has 25 minutes of runtime, so no shutdown is initiated. Consequently UPS 1 goes down, takes all the servers with it, and then shuts down UPS 2 as well! What we need is one of two things: (1) PowerChute initiates the shutdown as soon as either UPS reaches the 20-minute low-battery duration setting, instead of waiting for both; or (2) the UPS with the heavier load expends its entire battery without shutting down both UPSs, letting the load switch across to the UPS that still has runtime remaining; that way, when that UPS reaches its low-battery duration, it can proceed with the graceful shutdown via PowerChute. Hope that makes sense; any help is greatly appreciated!

    Read the article

  • Alias using Nginx causing phpMyAdmin login endless loop

    - by Seb Dangerfield
    Recently I've been trying to set up a web server using Nginx (I normally use Apache). However, I've run into a problem trying to set up phpMyAdmin on an alias. The alias correctly takes you to the phpMyAdmin login screen, but when you enter valid credentials and hit Go, you end up back on the login screen with no errors. That sounded like a cookie or session problem to me... but if I symlink the phpMyAdmin directory and log in through the symlinked version, it works fine! Both the symlinked and the aliased version set the same number of cookies, and both seem to set the cookies for the correct domain and path. My Nginx config for the PHP alias is as follows:

        location ~ ^/phpmyadmin/(.*\.php)$ {
            alias /usr/share/phpMyAdmin/$1;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $request_filename;
        }

    I'm running Nginx 0.8.53, PHP 5.3.3, MySQL 5.1.47 and phpMyAdmin 3.3.9 (self-install), and php-mcrypt is installed. Has anyone else experienced this behaviour before? Any ideas about how to fix it?
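
    Since the symlinked copy works, capturing the login round-trip headers for both versions and diffing them should show exactly what differs; a sketch (the hostname and the symlinked path are placeholders):

        # compare the cookie headers issued by the aliased and the symlinked versions
        curl -sI http://myhost/phpmyadmin/index.php | grep -i '^set-cookie'
        curl -sI http://myhost/pma-symlink/index.php | grep -i '^set-cookie'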

    Read the article

  • Massive SQL issue shutting down our site.

    - by Pselus
    Our website started timing out like crazy today. All of our clients are finding it unusable. The only error we can trace down as a potential problem is this one, reported through ASP by the Microsoft OLE DB Provider for ODBC Drivers:

        SQLAllocHandle on SQL_HANDLE_DBC failed

    I have no idea what it means or how to go about fixing it. Has anyone encountered this error before? Currently you can log in to our site, but once you go to do anything else, you find yourself logged out or nothing happens. We have a lot of Ajax going on, so the "nothing happens" probably comes from Ajax pages not loading properly due to the logouts, leaving nothing to display to the user. Like I said, I'm at a loss; any advice on this error is welcome. EDIT: I realize this isn't necessarily a programming question, but we are a small startup that only yesterday started talking about how we need a backup server. Apparently we talked about it too late. We don't have a DBA, just two mid-level programmers trying their hardest to keep our clients happy. So please, if you have any assistance, give it, but please don't close my question right now. EDIT 2: It turns out we had something running on our server called "ServerMask", which makes our IIS server look like Apache to the outside world. Shutting it down fixed our issue. Still no idea why it was acting up, but it was apparently the problem. Thanks to everyone who tried to help.

    Read the article

  • eXist-db: can't start webstart client on a closed port, reverse proxied via Apache

    - by rvdb
    I am configuring an Apache HTTP server so that it reverse proxies requests starting with /apps/ to an eXist-db instance running in a Tomcat server on port 8082. That port has been closed in the firewall and is inaccessible to the outside world. Following the eXist documentation, I have the following rules in place in my httpd.conf file:

        ProxyPass /apps/ http://localhost:8082/
        ProxyPassReverse /apps/ http://localhost:8082/
        ProxyPassReverseCookiePath /apps/ /

    All goes well for requests to e.g. http://mydomain/apps/exist/index.xml. Yet the webstart client (accessible at http://localhost:8082/exist/webstart/exist.jnlp on the web server) doesn't work behind the proxy. While http://mydomain/apps/exist/webstart/exist.jnlp does generate a valid exist.jnlp file, that file can't be executed. The reason seems obvious: the eXist-db instance generating the exist.jnlp file only sees the proxied request as http://localhost:8082/exist/webstart/exist.jnlp. Since the exist.jnlp file is executed on the client, that reference is meaningless (unless the client computer happens to have an eXist-db instance running on that port), so executing the exist.jnlp file fails with a 'connection refused' error. There's no problem at all connecting a local eXist-db Java client to the proxied eXist instance with the URL xmldb:exist://mydomain/apps/exist/xmlrpc. The problem lies in generating the webstart exist.jnlp file, which seems to need a publicly accessible URL. However, opening port 8082 and replacing the proxy references to http://localhost:8082 with http://mydomain:8082 IMO rather defeats the point of reverse proxying. Have others had success reverse proxying eXist-db on a closed port behind Apache? Are there perhaps some proxy configuration settings I have overlooked (I'm no expert at all) that can make eXist see the original request instead of the proxied one? Kind regards, Ron
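
    Apache can forward the original Host header to the backend instead of localhost:8082, which is often enough for applications that build absolute URLs from the incoming request; a sketch of the idea (whether eXist's JNLP generator honours the forwarded host is an assumption worth testing):

        # pass the client's Host header through to Tomcat/eXist
        ProxyPreserveHost On
        ProxyPass        /apps/ http://localhost:8082/
        ProxyPassReverse /apps/ http://localhost:8082/

    Even with the host preserved, the /apps/ prefix is still stripped before the request reaches eXist, so the URLs inside the generated exist.jnlp may need a matching path on the proxy as well.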

    Read the article

  • Mustek 1200 CP driver SFC4.SYS bluescreens with BAD_POOL_HEADER

    - by Slink84
    I have Windows XP SP2. Recently it started bluescreening right after startup with a 'BAD_POOL_HEADER' (0x00000019) error caused by the SFC4.SYS driver. After googling for a while I found out that this is the driver of my Mustek 1200 CP scanner. Booting in safe mode and uninstalling it solved the problem... and created another one: now I can't use my scanner. The weird thing is that it had been working on this PC for a while without any problems. It all started suddenly, and I can't remember installing anything that might have affected it. Reverting to several earlier system restore points didn't help. I've tried re-installing the driver from the Mustek website, in case my copy had got corrupted or infected by a virus, but it did not help; it still bluescreens. Also, I've installed Avast and scanned my PC; no viruses were found. If anyone has had this problem before or has an idea what might have caused it, please help. ED: @Michael Todd: "...try installing on another PC..." I've installed it on my friend's PC. He has the same OS version, with the latest updates, just like mine (he wasn't too happy, even after I assured him that it is easy to fix by uninstalling the driver :] ). It worked fine: no bluescreens whatsoever. So I think I've narrowed it down to either BIOS settings or some wicked driver conflict. The next thing I'm going to try is re-installing XP, or installing Windows 7. I'm not too happy about the prospect of mucking about with BIOS settings...

    Read the article

  • Error importing large MySQL dump file which includes binary BLOBs in Windows

    - by Daniel Magliola
    I'm trying to import a MySQL dump file, which I got from my hosting company, into my Windows dev machine, and I'm running into problems. I'm importing it from the command line, and I'm getting a very weird error:

        ERROR 2005 (HY000) at line 3118: Unknown MySQL server host '+?*á±dÆ-N+Æ·h^ye"p-i+ Z+-$?P+Y.8+|?+l8/l¦¦î7æ¦X¦XE.ºG[ ;-ï?éµ?º+¦¦].?+f9d릦'+ÿG?-0à¡úè?-?ù??¥'+NÑ' (11004)

    I'm attaching the screenshot because I assume the binary data would get lost otherwise... I'm not exactly sure what the problem is, but two potential issues are the size of the file (2 GB), which is not insanely large but not trivially small either, and the fact that many of these tables have JPG images in them (which is why the file is 2 GB large, for the most part). Also, the dump was taken on a Linux machine and I'm importing it into Windows; I understand that shouldn't add to the problems, but I'm not sure. Now, that binary garbage is why I think the images in the file might be a problem, but I've been able to import similar dumps from the same hosting company in the past, so I'm not sure what the issue might be. Also, looking into this file (and line 3118 in particular) is kind of impossible given its size (I'm not really handy with Linux command-line tools like grep, sed, etc.). The file might be corrupted, but I'm not exactly sure how to check. What I downloaded was a .gz file, which I "tested" with WinRAR, and it says it looks OK (I'm assuming gz has some kind of CRC). If you can think of a better way to test it, I'd love to try that. Any ideas what could be going on / how to get past this error? I'm not very attached to the data, since I just want this as a copy for dev, so if I have to lose a few records, I'm fine with that, as long as the schema remains perfectly sound. Thanks! Daniel
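
    gzip does carry a CRC, and both verifying it and peeking at the offending line take one command each (the same commands work under Cygwin or Git Bash on Windows):

        # verify the gzip checksum explicitly; no output means the archive itself is intact
        gunzip -t dump.sql.gz

        # print the first 300 characters of line 3118, where the import fails
        sed -n '3118p' dump.sql | cut -c 1-300

    If line 3118 turns out to be the middle of a huge INSERT full of JPEG blobs, suspect whatever transferred or re-encoded the file rather than the dump itself.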

    Read the article

  • Windows XP error message: "Windows cannot find 'explorer.exe'"

    - by Meysam
    In Windows XP I can open "My Computer" and see all the hard drives. I can also see the explorer.exe process running among the other processes in Task Manager. But after opening "My Computer", when I double-click on one of the drives to open it, I get the following error message:

        Windows cannot find 'explorer.exe'. Make sure you typed the name correctly, and then try again. To search for a file, click the Start button, and then click Search.

    Although I could detect and remove several suspicious files using Malwarebytes and Microsoft Security Essentials, the problem remains. The interesting point is that if I right-click on a folder and select Open or Explore from the menu, I can open the folder, but if I double-click on the folder, it does not open and I get the above error message. How can I fix this problem? Any advice would be appreciated! Update: I did a full (not quick) NTFS format of the C: drive and installed a fresh Windows XP on it. I no longer get this error when I double-click on the C drive icon, but the same error still appears when I double-click on the other drives. Maybe I should format them too!

    Read the article

  • Improving browser performance while using lots of tabs?

    - by Andrew
    My browsing habits cause me to open lots of windows and tabs, either related to different projects I'm working on or things I may want to read later. I use OS X with about 5 Spaces and multiple windows in each Space. The problem is that eventually I'll have around 200 or more tabs open (spread over 15-20 windows) that I don't want to close, and needless to say, my computer's performance starts to degrade. As I write this on my mobile, Safari on my laptop is locking up the computer. I used to use Chrome but found better performance with Safari. What I'd like to know: is there a comparison of browser performance as a function of tab count? I don't need a browser that keeps all tabs active; it would be great if the browser could improve performance by "putting tabs to sleep", or if there were some sort of tool for saving a "workspace" of tabs that you could reactivate the next time you work on that project. What sort of solution can you recommend for this problem?

    Read the article

  • Very slow connection to Xserve via AFP or SMB

    - by Mhoffman13
    Help. File transfer and connection speeds to our Xserve are painfully slow from newly purchased iMacs. The Xserve is only used as a file server and runs 10.4.11. The problem seems to happen only on brand-new iMacs running 10.6.3: when connected over either AFP or SMB, copying files is many times slower than usual. Other machines on the network running either 10.4 or 10.5 get normal connection speeds. To rule out OS incompatibility, I connected a new iMac running 10.6 to another computer running 10.4 over the network; the file transfer speed was as fast as normal. So the problem seems to lie with the Xserve (maybe). The AFP logs (both access and error) don't show anything unusual. One thing that did look different: when the iMac was connected to the Xserve, the user had its ID listed as its IP address, while the other connected machines had the ID broadcasthost. I also noticed that when connecting from the new iMac I can only see one of the mirrors, whereas when any other computer connects, both mirrors are shown. I tried restarting the Xserve, but the problem persists. Thanks in advance for any advice.
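
    A raw throughput test against the mounted share takes the client applications out of the equation, and a TCP tuning difference between 10.6 clients and older AFP servers is a commonly cited suspect; a sketch (the share path is a placeholder, and the sysctl change is a workaround to test, not a confirmed fix):

        # measure raw write throughput to the mounted share from the new iMac
        time dd if=/dev/zero of=/Volumes/XserveShare/ddtest bs=1m count=256
        rm /Volumes/XserveShare/ddtest

        # disable TCP delayed ACK on the 10.6 client, often suggested for slow AFP/SMB transfers
        sudo sysctl -w net.inet.tcp.delayed_ack=0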

    Read the article

  • Automated Scanning for Corrupted Archive Files

    - by Synetech inc.
    Hi, I have all kinds of files that I have downloaded from everywhere over the years scattered around my hard drives. I'm in the process of trying to organize them all and have run into a problem. Sometimes a file was not downloaded correctly (this was a big issue when Chromium first came out) and is thus corrupt. For media files this is relatively simple to determine (it requires actually opening and examining the file). For executables or other binaries it is relatively difficult (executables may or may not crash, and other binaries could be completely unknown formats). Archive files (e.g. zip, rar, 7z, exe, ace, etc.), however, should be pretty simple: they have a built-in corruption-detection facility. My problem is that I really don't want to open and test every single archived file on my drives; that would be a nightmare. I'm looking for a utility that can automate the process. Is there a program that can scan archive files on a drive and list the ones that are corrupt? Thanks a lot.
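
    7-Zip's command-line tool can test most of these formats in place, so a small loop gets close to the desired scanner; a sketch in bash (on Windows the same 7z t command works in a for loop or under Cygwin):

        # test every archive under /data and report only the broken ones
        find /data -type f \( -name '*.zip' -o -name '*.rar' -o -name '*.7z' \) -print0 |
        while IFS= read -r -d '' f; do
            7z t "$f" > /dev/null 2>&1 || echo "CORRUPT: $f"
        done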

    Read the article

  • Webmin ADSL module

    - by expatcm
    I was wondering if the Webmin ADSL module is going to help me solve a problem, but I cannot find any documentation telling me what the module actually does. Any ideas? Here is the problem I am hoping it will solve: I am just in the process of setting up a Debian server. I will use the DHCP server that is part of the Debian setup to manage the LAN IP addresses, and I want to turn off the DHCP server built into the Linksys ADSL modem/router and use the device just as a modem. The challenge is knowing what I need to do to get the public DNS on eth1. When I turn off the DHCP on the modem/router, not a lot happens, apart from me no longer being able to access its settings. So I am looking at this Webmin module and wondering whether it manages the ADSL connection and finds the public DNS address. The local DHCP server is working well for the LAN; I am just stuck on the external DNS.

    Read the article

  • User directive in nginx generates error despite running as UID root

    - by Joost Schuur
    I'm running nginx on a Mac OS X machine, installed with brew, and when I launch nginx, even with sudo, I get the following warning in my log file over and over again:

        4/21/11 2:03:42 AM org.nginx[3788] nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /usr/local/etc/nginx/conf/nginx.conf:2

    From nginx.conf:

        user jschuur staff;

    I'm already launching nginx with sudo, since I want it to listen on port 80. Shouldn't that be enough to give it the proper super-user privileges? The nginx binary as it's installed:

        jschuur@Glenna:sbin ? master ls -la
        total 4544
        drwxr-xr-x   3 jschuur staff     102 Apr 12 20:53 .
        drwxrwxr-x  15 jschuur staff     510 Apr 12 15:25 ..
        -rwxr-xr-x   1 jschuur staff 2325648 Apr 12 20:39 nginx

    FWIW, I recompiled the binary to set Passenger up and moved it from its original location into /usr/local/sbin. Update: as it turns out, Mac OS X was restarting nginx after I'd stopped it, because the launchd plist in ~/Library/LaunchAgents had it set to KeepAlive. However, because I had installed this plist into my local user's LaunchAgents folder, as opposed to /Library/LaunchAgents (or better yet /Library/LaunchDaemons, whose jobs run before you even log on), it wasn't executed as root. Because of an error about not having permission to use port 80, it actually exited right away, but it still wrote to the same log file as the nginx process I started with sudo, so I had thought the errors from the automatic restarts were coming from my manual restart via sudo. Bottom line: problem solved. The real problem here was the homebrew instructions specifically asking you to install the plist file into a location that doesn't allow a local site to use port 80.
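
    For reference, moving the job into the system domain looks something like this; the plist filename follows homebrew's convention at the time of writing and may differ on a given install (treat it as a placeholder):

        # stop the per-user job, then install it as a system-wide daemon that runs as root
        launchctl unload ~/Library/LaunchAgents/homebrew.mxcl.nginx.plist
        sudo mv ~/Library/LaunchAgents/homebrew.mxcl.nginx.plist /Library/LaunchDaemons/
        sudo launchctl load -w /Library/LaunchDaemons/homebrew.mxcl.nginx.plist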

    Read the article

  • CentOS 6.0 yum audacity dependency

    - by Kaemic
    I'm trying to install Audacity on CentOS using yum, and I can't get yum to resolve all the dependencies. Here's what I get (when I download the rpm file and click-install it, I run into the same dependency problem). Can someone help me with it?

        # yum --disablerepo=c6-media install audacity-1.2.3-2.2.el4.rf.i386.rpm
        Loaded plugins: fastestmirror, refresh-packagekit
        Loading mirror speeds from cached hostfile
         * base: mirror.karneval.cz
         * centosplus: mirror.karneval.cz
         * extras: centos.vieth-server.de
         * rpmforge: fr2.rpmfind.net
         * updates: mirror.karneval.cz
        Setting up Install Process
        Examining audacity-1.2.3-2.2.el4.rf.i386.rpm: audacity-1.2.3-2.2.el4.rf.i386
        Marking audacity-1.2.3-2.2.el4.rf.i386.rpm to be installed
        Resolving Dependencies
        --> Running transaction check
        ---> Package audacity.i386 0:1.2.3-2.2.el4.rf set to be updated
        --> Processing Dependency: wxGTK >= 2.4.0 for package: audacity-1.2.3-2.2.el4.rf.i386
        --> Processing Dependency: libwx_gtk-2.4.so.0 for package: audacity-1.2.3-2.2.el4.rf.i386
        --> Processing Dependency: libwx_gtk-2.4.so.0(WXGTK_2.4) for package: audacity-1.2.3-2.2.el4.rf.i386
        --> Running transaction check
        ---> Package audacity.i386 0:1.2.3-2.2.el4.rf set to be updated
        --> Processing Dependency: libwx_gtk-2.4.so.0 for package: audacity-1.2.3-2.2.el4.rf.i386
        --> Processing Dependency: libwx_gtk-2.4.so.0(WXGTK_2.4) for package: audacity-1.2.3-2.2.el4.rf.i386
        ---> Package wxGTK.i686 0:2.8.12-1.el6.rf set to be updated
        --> Finished Dependency Resolution
        Error: Package: audacity-1.2.3-2.2.el4.rf.i386 (/audacity-1.2.3-2.2.el4.rf.i386)
               Requires: libwx_gtk-2.4.so.0
        Error: Package: audacity-1.2.3-2.2.el4.rf.i386 (/audacity-1.2.3-2.2.el4.rf.i386)
               Requires: libwx_gtk-2.4.so.0(WXGTK_2.4)
         You could try using --skip-broken to work around the problem
         You could try running: rpm -Va --nofiles --nodigest
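
    The package being installed is built for EL4 (the el4.rf suffix) and wants the old wxGTK 2.4 library, while the EL6 rpmforge repo in the output above only offers wxGTK 2.8; letting yum pick an EL6 build of Audacity from the already-configured repos avoids the conflict, assuming one is available there:

        # see which audacity builds the enabled repos actually offer
        yum list audacity
        # install by package name so yum selects an el6 build and its matching wxGTK
        yum install audacity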

    Read the article

  • Proxy Settings per Machine not working on windows server 2008 R2 SP1

    - by Anirudh Goel
    I have a very interesting problem and would appreciate any help with it. In my scenario, scripts bring up VMs inside a domain. I want to enable internet access for all the VMs, which go through a proxy. I interact with the VMs using remote sessions, with the credentials of a user who belongs to the domain administrators group. The VMs are created and destroyed on the fly, and the scripts I run during their lifetime require internet access on them, so I cannot set the proxy settings statically; instead I used Active Directory Group Policy Management. I initially used the "User Configuration" option to set the proxy, which worked like a charm whenever I logged in to the machine. However, it doesn't work if I remote-login to the machine with an account that has not yet logged in to it. So I used this link to configure the proxy per machine. The group policy has worked fine and is reflected in the browser, but I am not able to resolve any DNS name, like http://www.google.com, or reach any internet-based site. Any idea what I can do?

    Read the article

  • Windows 7: Setting up backup to an external hard drive on another computer on the network

    - by seansand
    I have an external hard drive connected to a Windows 7 (Home Edition) computer. I have another computer (with Windows 7 Ultimate), and I want the Ultimate machine to back up to the same external hard drive without my having to disconnect the drive and move it over from the Home Edition PC. When I get to the "Set up backup" dialog within Windows 7, it asks me where to save the backup, and I select "Save on a network". However, when I enter "\\computername\harddrivename" under Browse, the "OK" button remains grayed out. The button stays grayed out unless I also enter a username and password under "Network credentials", but the account I have on the other computer doesn't have a password. To un-gray the button I must enter a fake password, which lets me click "OK", but then I obviously get a "bad password" error. Does anyone know how to get around this problem? (It seems kind of ridiculous.) I made sure the security settings on the external hard drive on the other computer give full access to Everyone, so permissions are not the problem. I also thought about using HomeGroup instead of the regular security settings, but there is no obvious way to go about it that way either.

    Read the article

  • Weird fluctuating time on a Xen Linux guest

    - by Vin-G
    I have a weird problem with some servers here at work: we have a few Xen guests whose current time fluctuates.

        # date;date;date;date;date;date;date
        Thu Feb 25 16:00:40 PHT 2010
        Thu Feb 25 16:00:48 PHT 2010
        Thu Feb 25 16:00:40 PHT 2010
        Thu Feb 25 16:00:48 PHT 2010
        Thu Feb 25 16:00:40 PHT 2010
        Thu Feb 25 16:00:48 PHT 2010
        Thu Feb 25 16:00:40 PHT 2010

    As seen above, the time fluctuates between 16:00:40 and 16:00:48, which is problematic for us since computing time differences in some of our scripts becomes inaccurate (e.g. what should be a difference of a few milliseconds becomes a difference of several seconds, or even a negative one). The problematic servers are Linux guests on a Xen host. The time fluctuates in the guests but is steady on the host itself. I've ruled out ntpd, since this happens regardless of whether ntpd is running in the guests. The guests use full virtualisation. The time on the host and the guest does match, except that the time in the guest fluctuates by a few seconds around the host's time, while the host time does not fluctuate. /proc/sys/xen/independent_wallclock is 0 on the host and does not exist in the guests. The ntpd service was stopped and disabled. Setting independent_wallclock to 1 on the host has no effect (that is, the time still fluctuates in the guests), though I was not able to restart the guests, as they are production servers; I might be able to do that over the weekend. Any ideas on what to check and how to resolve this problem?
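
    On fully virtualised guests this pattern often comes down to which clocksource the guest kernel trusts; checking it is cheap, and switching it at runtime is reversible (a sketch; the sysfs paths exist on RHEL/CentOS 5-era kernels, and whether a different source cures the jumping here is an assumption to test):

        # inside a guest, as root: which clocksource is in use, and what else is available?
        cat /sys/devices/system/clocksource/clocksource0/current_clocksource
        cat /sys/devices/system/clocksource/clocksource0/available_clocksource

        # try one of the alternatives listed above, e.g. acpi_pm, without rebooting
        echo acpi_pm > /sys/devices/system/clocksource/clocksource0/current_clocksource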

    Read the article

  • Error upgrading Ubuntu server from Intrepid to Jaunty

    - by Martin
    I'm trying to upgrade an old Ubuntu server from 8.10 (Intrepid) to 9.04 (Jaunty), but it fails:

        root@server1:/# do-release-upgrade
        Checking for a new ubuntu release
        Failed Upgrade tool signature
        Failed Upgrade tool
        Done downloading
        extracting 'jaunty.tar.gz'
        Failed to extract
        Extracting the upgrade failed. There may be a problem with the network or with the server.

    Does anyone have an idea why I get this error and how to fix it? UPDATE: I think I might have tracked the problem down. My /etc/update-manager/meta-release looks like this:

        [METARELEASE]
        URI = http://changelogs.ubuntu.com/meta-release
        URI_LTS = http://changelogs.ubuntu.com/meta-release-lts
        URI_UNSTABLE_POSTFIX = -development
        URI_PROPOSED_POSTFIX = -proposed

    If I go to http://changelogs.ubuntu.com/meta-release, it has this info for Jaunty:

        Dist: jaunty
        Name: Jaunty Jackalope
        Version: 9.04
        Date: Thu, 23 Apr 2009 12:00:00 UTC
        Supported: 0
        Description: This is the 9.04 release
        Release-File: http://archive.ubuntu.com/ubuntu/dists/jaunty/Release
        ReleaseNotes: http://changelogs.ubuntu.com/EOLReleaseAnnouncement
        UpgradeTool: http://archive.ubuntu.com/ubuntu/dists/jaunty-proposed/main/dist-upgrader-all/0.111.8/jaunty.tar.gz
        UpgradeToolSignature: http://archive.ubuntu.com/ubuntu/dists/jaunty-proposed/main/dist-upgrader-all/0.111.8/jaunty.tar.gz.gpg

    The links pointing at archive.ubuntu.com are broken, since Jaunty is EOL. I guess I could fix this by copying this file, replacing "archive" with "old-releases", hosting the modified file somewhere, and changing the URL in the meta-release file. Is this a good solution, or will it get me into worse problems?
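
    An alternative that sidesteps the upgrade tool entirely is the apt route commonly used for EOL-to-EOL hops: point sources.list at old-releases and dist-upgrade. A sketch (it backs up sources.list first, but note this skips the sanity checks do-release-upgrade would perform):

        # switch the release name and the mirrors to the EOL archive, then upgrade
        sudo sed -i.bak -e 's/intrepid/jaunty/g' \
            -e 's|[a-z.]*archive\.ubuntu\.com|old-releases.ubuntu.com|g' \
            -e 's|security\.ubuntu\.com|old-releases.ubuntu.com|g' /etc/apt/sources.list
        sudo apt-get update && sudo apt-get dist-upgrade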

    Read the article

  • Windows Remote Desktop: "configuring remote session" closes without error

    - by icelava
    I have a desktop/laptop pair at home running x64 Windows 7 (the desktop was upgraded from Windows Vista and works just fine). I remote-desktop to them on a daily basis when I'm out. In recent weeks I have occasionally failed to connect to my desktop: it connects and authenticates fine, but the "configuring remote session" dialog simply closes without showing me the desktop window or any error message. There is no related error in the event log on the desktop computer. Some suggestions call for disabling remote audio, which mine already is, and trying different audio modes did not yield any different result. I am not sure whether this is related to video card drivers (they do get auto-updated), since remote desktop video is supposed to be steered through a virtual device driver? Nonetheless, the desktop drives three monitors via an ATI Radeon HD5770 (1 DisplayPort, 2 DVI). I do not see a real problem with that, since I can usually connect and operate it remotely. I tried to "remote tunnel" in via my home laptop, but obviously that won't work either, as the problem lies in the desktop. What other conditions can cause remote desktop to break without an error? UPDATE: I came home and still couldn't connect to the desktop until I restarted the entire system.

    Read the article

  • How can I fix video tearing and pausing on Windows XP Flash videos?

    - by xvs
    I have what should be a reasonably fast PC: a quad-core Intel 6600 at 2.4 GHz, 4 GB of RAM, an ATI 3800-series video card and an LG L246WP monitor, which I selected specifically because it was supposed to work well with video, with no trails or other artifacts. So I should be able to play video with no problems, and I can, as long as that video isn't Flash video. With Flash, what I see is tearing, especially during pans, and pausing: every few seconds the video pauses for about 300 ms while the sound stays continuous. I tried going into the video card setup and changing vertical sync, pulldown detection, Windows Media video acceleration, deinterlacing and triple buffering, but no combination of settings I've tried has changed or corrected the problem in any way. I've also tried enabling and disabling hardware acceleration in the Flash settings, to no avail. The problem happens whether the video is still streaming or has fully streamed in before playing. So, what can I do? Is this just a Flash issue, or is there a way to get it to work?

    Read the article
