Search Results

Search found 49026 results on 1962 pages for 'open source alternative'.

Page 614/1962 | < Previous Page | 610 611 612 613 614 615 616 617 618 619 620 621  | Next Page >

  • Starting x11vnc remotely when X server is already running

    - by Madiyaan Damha
    I have an Ubuntu Linux machine that is already logged in and running an X server (it is past the login manager, e.g. GDM). I can access this machine through SSH. My goal is to start x11vnc on this machine and attach it to the X server that is already running. When I ssh into the machine and start x11vnc, it says: X11 was unable to open the X DISPLAY ":0", it cannot continue. How can I start x11vnc on the remote machine when I don't have physical access to it and the X server has already started? The reason I want to do this is that the remote machine has several windows open that I want to work on. Thanks,
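
    A minimal sketch of one common approach, assuming a GDM session on display :0; the username, host and .Xauthority path below are placeholders rather than details from the question. x11vnc needs the X authority cookie of the session owner in order to attach to the running display.

        # on the remote machine, point x11vnc at the session's X authority file
        sudo x11vnc -display :0 -auth /home/someuser/.Xauthority -localhost -once
        # newer x11vnc builds can also try to locate the cookie themselves:
        # sudo x11vnc -display :0 -auth guess -localhost -once

        # from the local machine, tunnel the VNC port over SSH and point a viewer
        # at localhost:5900
        ssh -L 5900:localhost:5900 someuser@remote-host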

    Read the article

  • How to automatically save sessions with multiple windows in Firefox

    - by Matthew Talbert
    Until recently I've primarily used Firefox's built-in session management, but now my needs have become more sophisticated. What I want is to have two windows: one with a fixed set of roughly five tabs, and the other with "automatic save". That is, when I start Firefox, I want one window to open with my five tabs and another to open with whatever I had when I shut Firefox down. I've installed "Session Manager", but I can't seem to get it to do what I want: it will save one window, but when I close a window, it removes that one from the session. Any suggestions for doing this with either Session Manager or another plugin would be great.

    Read the article

  • svnsync loses revision properties although hook installed

    - by roesslerj
    Hello all! I have a pretty weird problem. We have set up an SVN mirror via a cron job and svnsync (the sync has to go from inside a firewall to the outside, so a post-commit hook is not possible). We installed a pre-revprop-change hook just as documented. Everything seems to work fine, except that it doesn't. For example, when executing the script manually:

        # svnsync --non-interactive sync file://<path-to-mirror> --source-username <usr> --source-password <pwd>
        Committed revision 19817.
        Copied properties for revision 19817.

    No error, no complaints. But checking the revision properties says otherwise:

        # svnlook info <path-to-mirror>
        0
        # svn info -r HEAD file://<path-to-mirror> 2>&1
        Path: <root-of-mirror>
        URL: file://<path-to-mirror>
        Repository Root: file://<path-to-mirror>
        Repository UUID: <uid>
        Revision: 19817
        Node Kind: directory
        Last Changed Rev: 19817

    So somehow the author and timestamp information gets lost, but we need that information for our internal processes. Since no error or warning is produced, I have absolutely no idea where to start looking. Everything is local (except for the remote master), so there are no server logs to look at. I also tried to recopy the properties manually via svnsync copy-revprops (http://chestofbooks.com/computers/revision-control/subversion-svn/svnsync-Copy-revprops-Ref-svnsync-C-Copy-revprops.html). It says Copied properties for revision 19885, but when I query them, the result is the same. Any ideas how I could approach this problem, or better yet, how to solve it? Any ideas appreciated.
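
    A hedged sketch of re-copying the lost properties (the revision range, paths and credentials are placeholders, not taken from the report): svn:author and svn:date revprops only stick if the mirror's pre-revprop-change hook exists, is executable, and exits 0 for the user svnsync runs as, so that is worth re-checking before re-running copy-revprops and verifying the result.

        # 1. make sure the mirror's pre-revprop-change hook allows the change
        printf '#!/bin/sh\nexit 0\n' > /path/to/mirror/hooks/pre-revprop-change
        chmod +x /path/to/mirror/hooks/pre-revprop-change

        # 2. re-copy the revision properties for the affected range
        svnsync copy-revprops file:///path/to/mirror 19800:19817 \
            --source-username <usr> --source-password <pwd>

        # 3. verify that author and date are back
        svn propget --revprop -r 19817 svn:author file:///path/to/mirror
        svn propget --revprop -r 19817 svn:date   file:///path/to/mirror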

    Read the article

  • Wrong CSS mime type with Roundcube 0.5 beta and nginx

    - by Julien Vehent
    I'm running into a CSS problem. This is a setup based on Debian Squeeze (nginx/0.7.67, php5/cgi) on which I installed the latest Roundcube 0.5 beta. PHP is processed properly and login works fine, but the CSS files are not loaded, and Firefox throws the following errors:

        Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/common.css?s=1290600165 was not loaded because its MIME type, "text/html", is not "text/css".
        Source File: https://webmail.example.net:10443/roundcube/?_task=login Line: 0
        Error: The stylesheet https://webmail.example.net:10443/roundcube/skins/default/mail.css?s=1290156319 was not loaded because its MIME type, "text/html", is not "text/css".
        Source File: https://webmail.example.net:10443/roundcube/?_task=login Line: 0

    As far as I understand, nginx doesn't see the .css extension (because of the ?s= argument) and thus sets the MIME type to the default value, text/html. Should I fix this in nginx (and how?), or is it Roundcube-related?

    Edit: It seems to be nginx-related. The Content-Type isn't set for any type other than text/html. I had to manually include the following declarations to force the CSS and JS content types. That's ugly, and I never had this problem before... any idea?

        location ~ \.css {
            add_header Content-Type text/css;
        }
        location ~ \.js {
            add_header Content-Type application/x-javascript;
        }
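
    For what it's worth, nginx matches location blocks against the URI with the query string already stripped, so the ?s= argument alone shouldn't hide the .css extension; the more common causes are a missing mime.types include or a catch-all location that hands static files to the PHP backend, which then answers with text/html. A hedged sketch of a cleaner configuration (the root, FastCGI address and paths are assumptions, not taken from the question):

        server {
            # map extensions to Content-Type instead of forcing headers per location
            include       /etc/nginx/mime.types;
            default_type  application/octet-stream;

            root /var/www/roundcube;

            # serve static assets directly so they never reach the PHP handler
            location ~* \.(css|js|png|gif|jpg|ico)$ {
                expires 7d;
            }

            location ~ \.php$ {
                include        fastcgi_params;
                fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass   127.0.0.1:9000;   # assumption: local php5-cgi FastCGI listener
            }
        }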

    Read the article

  • Excel crashes when opening Excel files from Internet Explorer

    - by Rob Z
    I have been running into some issues when opening Excel files from Internet Explorer. Generally the first document or two will open fine, but after that, trying to open a file causes Excel and Internet Explorer to crash to the desktop without any notification. This doesn't happen for users running Excel 2007, but for users with Excel 2003 it may or may not happen. The files in question are Excel XML files, and Internet Explorer 6 and Excel 2003 are being used. At this time it is not possible to upgrade Internet Explorer, but it would be possible to upgrade Excel to version 2007 if that would resolve the issue. Overdue update: We recently upgraded to Firefox at the office, which has rendered this error a non-issue; however, it is still unresolved in the sense that we haven't been able to come up with an explanation. Since IE6 is still installed on the systems, a fix for the problem (or an explanation of why it's happening) would be appreciated.

    Read the article

  • Office 2011 Mac - Unable to save Word files, plus normal.dot alert errors

    - by Jeff D
    There are actually three errors here. When I open Word, I get:

        Word cannot open the existing global template. ()

    If I create a file, type a character, and try to save it to the desktop (which I have no problem writing to otherwise), I get:

        Word cannot save or create this file. The disk may be full or write-protected. Try one or more of the following: * Free more memory * Make sure that the disk you want to save the file on is not full, write-protected, or damaged. ()

    I am just saving to the desktop, and I can save Excel files (or anything else) there. After the failure, if I save again, the default file name becomes: .doc...doc Weird. Finally, when I close Word completely, I get:

        Do you want to replace the existing Normal.dotm?
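
    A hedged troubleshooting sketch from a Terminal session: these symptoms usually point at a damaged or locked user template, so quitting Word, moving Normal.dotm aside and letting Word rebuild it is a common first step. The template path below is the usual Office 2011 location, but it is an assumption here, as is the idea that the template is the actual cause.

        # quit Word first, then move the user template aside so Word recreates it
        cd ~/Library/Application\ Support/Microsoft/Office/User\ Templates/
        mv Normal.dotm Normal.dotm.bak

        # also confirm the desktop and the template folder are writable by this user
        ls -ld ~/Desktop ~/Library/Application\ Support/Microsoft/Office/User\ Templates/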

    Read the article

  • Windows zip error: Windows cannot complete the extraction. The destination file could not be created

    - by Snake Eyes
    I get a strange error in Windows 7 when I try to open a file inside a ZIP archive. I have two files, File1.dwg and File2.dwg, and the archive is not corrupted (I checked it with the 7-Zip utility). When I double-click either file inside the ZIP (I opened the ZIP with Windows Explorer), the error from the title occurs. If I open the ZIP file with 7-Zip or WinRAR, every file can be opened without error and AutoCAD launches to show the DWG content. Why does this happen only with the Windows built-in ZIP support? What should I look at to diagnose or remove this error? Thank you. UPDATE: If I click Extract all in Windows Explorer, it fails with an unspecified error.
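
    A hedged workaround sketch (the archive and folder paths are placeholders, not from the question): extracting with the 7-Zip command-line tool to a short destination path bypasses the Explorer ZIP handler and also sidesteps the long-path and "blocked downloaded file" conditions that commonly produce this dialog.

        REM extract the whole archive to a short path
        "C:\Program Files\7-Zip\7z.exe" x "C:\Users\Me\Downloads\drawings.zip" -oC:\Temp\dwg

        REM or pull out a single file
        "C:\Program Files\7-Zip\7z.exe" e "C:\Users\Me\Downloads\drawings.zip" File1.dwg -oC:\Temp\dwg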

    Read the article

  • Mounting FTP as filesystem in debian using curlftpfs

    - by Karel Bílek
    I am trying to mount an FTP server as a filesystem in Debian using curlftpfs. After running

        curlftpfs -o allow_other username:[email protected] /mnt/myftp/

    all I get is

        fuse: failed to open /dev/fuse: Permission denied

    even when run as root. What am I doing wrong? (curlftpfs reports curlftpfs 0.9.2 libcurl/7.21.0 fuse/2.8.) edit: When I run ls -lah /dev/fuse, I see

        crw-rw---- 1 root fuse 10, 229 Apr 9 00:34 /dev/fuse

    ...but even after adding both myself and the root user to the fuse group, neither I (as a user) nor root can mount the FTP; I still see

        fuse: failed to open /dev/fuse: Permission denied

    edit2: Even if I run this fairly insecure and crazy line:

        sudo chmod a+rwx /dev/fuse

    I still get the permission-denied message. Which permission could be denied? edit3: I forgot to mention I am on a VPS with OpenVZ. I thought that wasn't an issue, but apparently it is! I am adding the resolution as the answer.
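
    For context, a hedged sketch of what the OpenVZ resolution typically involves: inside a container, /dev/fuse only works once the hardware node has the fuse module loaded and has granted the device to that container. The container ID and FTP credentials below are placeholders.

        # on the OpenVZ hardware node (CTID 101 is a placeholder)
        modprobe fuse
        vzctl set 101 --devnodes fuse:rw --save

        # then, inside the container, the mount should succeed
        curlftpfs -o allow_other user:pass@ftp.example.com /mnt/myftp/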

    Read the article

  • How to configure nginx to serve static contents from RAM?

    - by Vijayendra Tripathi
    I want to set up nginx as my web server. I want image files cached in memory (RAM) rather than read from disk. I am serving a small page and want a few images always served from RAM. I don't wish to use Varnish (or any other such tool) for this, as I believe nginx has the capability to cache contents in RAM. I am not sure how to configure nginx for this; I tried a few combinations, but they didn't work, and nginx used the disk every time to get the images. For example, when I ran Apache Bench to test with the following command:

        ab -c 500 -n 1000 http://localhost/banner.jpg

    I got the following error:

        socket: Too many open files (24)

    I guess this means nginx is trying to open too many files simultaneously from the disk and the OS is not allowing the operation. Can anyone suggest a correct configuration? Thanks for considering this message.
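
    Two hedged notes, sketched below. First, the "Too many open files (24)" message is raised by ab itself when the benchmarking client runs out of file descriptors, so it says nothing about nginx reading from disk. Second, nginx's open_file_cache keeps descriptors and metadata rather than file contents, so keeping a handful of images in RAM is normally left to the OS page cache or done by serving them from a tmpfs mount (the limit, size and paths below are assumptions).

        # raise the descriptor limit for the shell running ab, then re-run the test
        ulimit -n 10240
        ab -c 500 -n 1000 http://localhost/banner.jpg

        # optionally serve the few images from a RAM-backed filesystem
        sudo mkdir -p /var/www/ram
        sudo mount -t tmpfs -o size=64m tmpfs /var/www/ram
        sudo cp /var/www/images/banner.jpg /var/www/ram/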

    Read the article

  • Visio Losing My Diagrams!

    - by bobber205
    I've created four different sequence diagrams under "Static Model-Top Package". Once I tried creating a fifth, Visio wouldn't keep it: when I double-click to open it, it opens the fourth one instead. I can delete the fifth one and create a new one, and I can edit it, but as soon as I go to another diagram I lose it and can only go back to the fourth one and edit that, even though I right-clicked the fifth one and selected "Open Sequence". What is going on? Has anyone else seen this Visio issue?

    Read the article

  • How to stop receiving an error reading iCal invitations on Mac sent from Thunderbird?

    - by Matthew Rankin
    When I open an iCal invitation (a Mail Attachment.ics file) in Mac Mail.app that was sent to me by someone using Thunderbird, I receive an error. The only solution I've found to date is to save the Mail Attachment.ics file somewhere like the Desktop, open it in a text editor, and delete the blank lines at the beginning and end of the file. The problem appears to happen only with invitations from people using Thunderbird on Windows. Does anyone know of a better way to solve this? Is this a bug in Apple iCal, or is Thunderbird not meeting the iCalendar spec? Configuration information: OS: Mac OS X 10.6.2; Mail program: Mac Mail.app ver. 4.2 (1077); Calendar program: Mac iCal.app ver. 4.0.1 (1374)
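
    A hedged sketch of automating the current workaround from Terminal (it assumes the invitation was saved to the Desktop under the default name "Mail Attachment.ics"): macOS ships Perl, and slurping the whole file makes it easy to strip blank lines at both ends before handing it back to iCal.

        # strip leading blank lines and reduce trailing whitespace to a single newline,
        # editing the saved attachment in place (a .bak copy is kept)
        perl -i.bak -0777 -pe 's/\A\s+//; s/\s+\z/\n/' "$HOME/Desktop/Mail Attachment.ics"

        # open the cleaned file with the default handler (iCal)
        open "$HOME/Desktop/Mail Attachment.ics"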

    Read the article

  • What might cause a print error in Perl?

    - by Mark J Seger
    I have a long-running script that, every hour, opens a file, prints to it, and closes the file. I've recently found that, very rarely, the print fails. I know this not because I'm testing the status of the print itself, but because entries are missing from the file until the system is actually rebooted! I do trap file-open failures and write a message to syslog when that happens, and I'm not seeing any open failures, so I'm now guessing it may be the print that is failing. I'm not trapping print failures, which I suspect most people don't, but I am now going to update that one print. Meanwhile, my question is: does anyone know what kinds of situations could cause a print statement to fail when there is plenty of disk space and no contention for a file that has been successfully opened in append mode?
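
    A minimal sketch of checking every step of the write, not just the open (the path and message are placeholders): because output is buffered, a failed write frequently only surfaces at close() time, so its return value and $! are worth logging as well.

        use strict;
        use warnings;

        my $logfile = '/var/tmp/hourly.log';   # placeholder path

        open(my $fh, '>>', $logfile)
            or die "open $logfile failed: $!";

        print {$fh} "hourly entry\n"
            or warn "print to $logfile failed: $!";   # e.g. ENOSPC, EIO, stale NFS handle

        close($fh)
            or warn "close of $logfile failed: $!";   # buffered write errors often show up here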

    Read the article

  • Access denied error 3221225578 with file sharing to Windows server

    - by Ian Boyd
    I'm trying to access the shares on a server. The credential box appears, I enter a correct username and password, and I get access denied. The silly thing is that I can Remote Desktop to the server (using the same credentials), and I can check the Security event log for the access-denied errors:

        Event Type: Failure Audit
        Event Source: Security
        Event Category: Account Logon
        Event ID: 681
        Date: 3/19/2011
        Time: 11:54:39 PM
        User: NT AUTHORITY\SYSTEM
        Computer: STALWART
        Description: The logon to account: Administrator by: MICROSOFT_AUTHENTICATION_PACKAGE_V1_0 from workstation: HARPAX failed. The error code was: 3221225578

    and

        Event Type: Failure Audit
        Event Source: Security
        Event Category: Logon/Logoff
        Event ID: 529
        Date: 3/19/2011
        Time: 11:54:39 PM
        User: NT AUTHORITY\SYSTEM
        Computer: STALWART
        Description: Logon Failure:
        Reason: Unknown user name or bad password
        User Name: Administrator
        Domain: stalwart
        Logon Type: 3
        Logon Process: NtLmSsp
        Authentication Package: NTLM
        Workstation Name: HARPAX

    Looking up the error code (3221225578), I found a TechNet article, "Audit Account Logon Events" by Randy Franklin Smith, whose Table 1 (Error Codes for Event ID 681) lists 3221225578 as "The username is correct, but the password is wrong." That would seem to indicate that the username is correct but the password is wrong. I've tried the password many times, uppercase and lowercase, on different user accounts, with and without prefixing the username with servername\username. Why can I access the server over RDP but not over file sharing?
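
    A hedged diagnostic sketch, run from a command prompt on the client (HARPAX); the server name comes from the event logs above, everything else is generic. Testing outside the Explorer credential dialog reports the exact error, and an NTLM policy mismatch (e.g. LmCompatibilityLevel) or clock skew can make a correct password fail over SMB while RDP still works.

        REM clear any cached connection to the server's IPC$ share (ignore an error if none exists)
        net use \\STALWART\IPC$ /delete

        REM connect to the hidden IPC$ share; the trailing * prompts for the password
        net use \\STALWART\IPC$ /user:STALWART\Administrator *

        REM compare clocks; a large skew breaks NTLMv2 authentication
        net time \\STALWART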

    Read the article

  • SharePoint 2010 search error gives me a correlation ID, the Event Viewer is empty, and there is no log

    - by saber tabatabaee yazdi
    We have a SharePoint 2010 farm that had worked properly until last week. We configured and started the search service last week; it worked, and when we tested it with some criteria, results appeared, so we were very happy. But after a few days, searching started producing an error. After that, we deleted the search service application and reconfigured it, and now opening any document library produces this error, although lists still open correctly without any error: An unexpected error has occurred. Troubleshoot issues with Microsoft SharePoint Foundation. Correlation ID: 5becf903-d13e-4490-a23c-d7e4f68ca769. Please help us.
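
    A hedged sketch of turning that correlation ID into a readable error: even when the Event Viewer is empty, SharePoint 2010 writes the details to its ULS trace logs, and the SharePoint 2010 Management Shell can filter them by correlation (the output path below is a placeholder).

        # run in the SharePoint 2010 Management Shell on a farm server
        Get-SPLogEvent | Where-Object { $_.Correlation -eq "5becf903-d13e-4490-a23c-d7e4f68ca769" } |
            Select-Object Timestamp, Category, Level, Message | Format-List

        # or collect the matching entries from every server in the farm into one file
        Merge-SPLogFile -Path "C:\Temp\correlation.log" -Correlation "5becf903-d13e-4490-a23c-d7e4f68ca769"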

    Read the article

  • Opening an Oracle database crashes the service

    - by tundal45
    I am experiencing a weird issue with Oracle. After a crash, the service started fine and the database mount went fine as well. However, when I issue the alter database open; command, the database does not open: it gives a generic "cannot connect to the database" error and crashes the service. Oracle Support has not seen this issue before, so it's pretty scary. The fact that there are no logs giving any leads as to what could be causing this is also scary. I was wondering if the good folks over at Server Fault had seen something like this or have some insights on things I could try. It's Oracle 10g running on Windows Server 2003. Thanks, Ashish
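
    A hedged sketch of where 10g usually does leave clues even when the client only reports a dropped connection: the alert log and the trace files written at open time (run from a local SYSDBA session on the server; the directories come from the parameter queries).

        C:\> sqlplus / as sysdba

        SQL> STARTUP MOUNT;
        SQL> SHOW PARAMETER background_dump_dest;   -- alert_<SID>.log lives in this directory on 10g
        SQL> SHOW PARAMETER user_dump_dest;         -- server trace files land here
        SQL> ALTER DATABASE OPEN;
        -- then inspect the tail of alert_<SID>.log for the ORA- error behind the generic message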

    Read the article

  • Fail2Ban adds iptables rules but they are not working?

    - by EApubs
    Fail2Ban just blocked my IP for three failed SSH attempts. It added the iptables rule, and I can see it using the sudo iptables -L -n command, but I can still access the site and log in through SSH! What might be the problem? Is it because I'm using CloudFlare? I have set nginx to write the real IPs to the access logs instead of the CloudFlare IPs. Isn't that enough?

        Chain fail2ban-ssh (1 references)
        target     prot opt source         destination
        DROP       all  --  119.235.14.8   0.0.0.0/0
        RETURN     all  --  0.0.0.0/0      0.0.0.0/0

    The INPUT chain:

        Chain INPUT (policy DROP)
        target                    prot opt source     destination
        fail2ban-NoAuthFailures   tcp  --  0.0.0.0/0  0.0.0.0/0   tcp dpt:80
        fail2ban-nginx-dos        tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 80,8090
        fail2ban-postfix          tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 25,465
        fail2ban-ssh-ddos         tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 22
        fail2ban-ssh              tcp  --  0.0.0.0/0  0.0.0.0/0   multiport dports 22
        ufw-before-logging-input  all  --  0.0.0.0/0  0.0.0.0/0
        ufw-before-input          all  --  0.0.0.0/0  0.0.0.0/0
        ufw-after-input           all  --  0.0.0.0/0  0.0.0.0/0
        ufw-after-logging-input   all  --  0.0.0.0/0  0.0.0.0/0
        ufw-reject-input          all  --  0.0.0.0/0  0.0.0.0/0
        ufw-track-input           all  --  0.0.0.0/0  0.0.0.0/0
        LOG                       all  --  0.0.0.0/0  0.0.0.0/0   LOG flags 0 level 4
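
    A hedged diagnostic sketch (the interface name is an assumption): web traffic proxied through CloudFlare reaches the server from CloudFlare's addresses, so a DROP on your own IP cannot block the site, while SSH is not proxied, so the packet counters and a capture show whether the banned address is really the one arriving on port 22.

        # do the DROP rules actually match anything? (-v adds packet/byte counters)
        sudo iptables -L fail2ban-ssh -n -v

        # which source address does an SSH attempt actually arrive from?
        sudo tcpdump -n -i eth0 tcp port 22

        # and what does fail2ban itself think it has banned?
        sudo fail2ban-client status ssh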

    Read the article

  • Windows XP SP3 TCP/IP No buffer space available

    - by Natalia
    I have exactly the same problem as described here: Windows XP TCP/IP No buffer space available. On Windows XP Pro SP3, if one runs an experiment that opens TCP/IP sockets in a loop (basically: listen on port 7000, listen on port 7001, and so on), then after approximately 649 open sockets one starts getting the error: No buffer space available (maximum connections reached?). I've tried to edit the registry as described here: http://smallvoid.com/article/winnt-tcpip-max-limit.html. I set MaxUserPort = 65534 and MaxFreeTcbs = 2000, but it didn't help. What else can I do? I need 1000 server sockets. Here is the error stack:

        05.04.2012 10:23:57 java.net.SocketException: No buffer space available (maximum connections reached?): listen
            at sun.nio.ch.ServerSocketChannelImpl.listen(Native Method)
            at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:127)
            at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
            at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:52)
            at channelserver.NIOAppServer.initSelector(NIOAppServer.java:40)
            at channelserver.NIOAppServer.(NIOAppServer.java:27)
            at channelserver.NIOServer.main(NIOServer.java:433)
            at channelserver.NIOServer.main(NIOServer.java:438)

    Read the article

  • Has anyone managed to build php5-xapian on Ubuntu 12.04?

    - by jetboy
    As Xapian has been dropped from the Ubuntu repositories, I'm attempting to build my own .deb from the instructions here: http://article.gmane.org/gmane.comp.search.xapian.general/8855 and http://beeznest.wordpress.com/2011/07/06/howto-build-your-own-binaries-of-php-xapian-bindings-for-debian/. I can only get things to progress beyond the first few seconds by leaving out 'rm debian/control', but if I do, it looks as if the Python and Ruby bindings build and pass their versions of smoketest correctly. However, the PHP part of the build fails with this error:

        /home/charlie/xapian-bindings-1.2.8/php/smoketest.php:38: include(xapian.php): failed to open stream: No such file or directory
        FAIL: smoketest.php

    There's a xapian.php file in /home/charlie/xapian-bindings-1.2.8/php/php5/, but if I copy it to /home/charlie/xapian-bindings-1.2.8/php/ or change the path to it in smoketest.php, the build fails right near the start with:

        dpkg-source: error: aborting due to unexpected upstream changes

    Unfortunately I'm out of my comfort zone building from source. Anyone got any ideas? Edit, following James' answer: It builds fine if I follow the instructions exactly. I built it on a test VM initially, but that didn't build the PHP package because PHP itself wasn't installed. Obvious gotcha, but worth mentioning. Installing generated the following error:

        Setting up php5-xapian (1.2.8-1) ...
        Processing triggers for libapache2-mod-php5 ...
        dpkg (subprocess): unable to execute installed post-installation script (/var/lib/dpkg/info/libapache2-mod-php5.postinst): Permission denied
        dpkg: error processing libapache2-mod-php5 (--install): subprocess installed post-installation script returned error exit status 2
        Errors were encountered while processing: libapache2-mod-php5

    It's only a script for restarting Apache. Stopping Apache before running sudo dpkg -i php5-xapian_*.deb prevents the error. Xapian now shows up in phpinfo(). Job done. Thanks.

    Read the article

  • What ports does Advantage Database Server need?

    - by asherber
    I have an application which uses ADS, and I am attempting to deploy it in a Windows network environment with a rather restrictive firewall. I am having a problem configuring the firewall ports appropriately. ADS lives on \\server, and it's listening on port 1234. When \\client tries to connect to \\server\tables, I get error 6420 (Discovery process failed). When \\client tries to connect to \\server:1234\tables, I get error 6097, bad IP address specified in the connection path. \\server is pingable from \\client, and I can telnet to \\server:1234. If I try to connect from a client machine inside the firewall, either connection path works fine. It seems there must be something else I need to open in the firewall. Any ideas? Thanks, Aaron. Edit: I should have specified that the firewall is open to \\server:1234 specifically for TCP traffic. Is UDP involved here in some way?

    Read the article

  • UAC prompt won't allow non-administrator to access IIS 7 Manager.

    - by Triynko
    The IIS 7 Manager shortcut throws up a UAC prompt requiring administrative elevation as soon as it is clicked, which makes it impossible for a non-admin to use it. When I first created the user account, I could open IIS 7 Manager but could not connect. I temporarily made the user account an administrator, then opened IIS 7 Manager again. Now that I've removed the user from the Administrators group, every time I open IIS 7 Manager it throws up the UAC prompt, making it impossible to do anything with it as a non-admin. Is this a bug? Is there a way to reset the settings for this user so that IIS 7 Manager is back in some kind of first-run state?

    Read the article

  • FTP in DMZ, TCP Ports for LDAP Auth

    - by sam
    Scenario: (outside) --- (ASA 5510) --- (inside: Windows 2008 DC), with a Windows 2008 FTP server in the DMZ. Which ports do I need to open from the DMZ to the inside so that FTP users can be authenticated against the inside DC? I have already opened 389 (LDAP), 636 (secure LDAP) and 53 (DNS), but the FTP client always gets stuck after submitting the credentials, and the FTP server logs a "logon error" event. The error messages indicate that the issue could be closed ports: if I change the ACL to "IP", meaning all ports are open, everything works fine.
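
    A hedged sketch of checking the usual additional requirements from the DMZ host (the DC name is a placeholder, and PortQry is a free Microsoft command-line tool): a member server authenticating domain users typically also needs Kerberos, the RPC endpoint mapper plus the dynamic RPC range, SMB/Netlogon, and often the Global Catalog, not just LDAP and DNS.

        REM Kerberos (TCP and UDP)
        portqry -n DC01 -p tcp -e 88
        portqry -n DC01 -p udp -e 88

        REM RPC endpoint mapper (the dynamic RPC port range is also required)
        portqry -n DC01 -p tcp -e 135

        REM SMB / Netlogon
        portqry -n DC01 -p tcp -e 445

        REM Global Catalog LDAP
        portqry -n DC01 -p tcp -e 3268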

    Read the article

  • Ubuntu upgrade process failed

    - by Spin0us
    I tried to dist-upgrade the Ubuntu server in my Percona cluster, but it failed with this message:

        The following packages have unmet dependencies:
         libmysqlclient18 : Depends: libmariadbclient18 (= 5.5.33a+maria-1~precise) but it is not installable

    And here is the package listing:

        # dpkg --list | grep -E 'percona|mysql'
        ii  libdbd-mysql-perl                  4.020-1build2              Perl5 database interface to the MySQL database
        iU  libmysqlclient18                   5.5.33a+maria-1~precise    Virtual package to satisfy external depends
        ii  mariadb-common                     5.5.33a+maria-1~precise    MariaDB database common files (e.g. /etc/mysql/conf.d/mariadb.cnf)
        ii  percona-xtrabackup                 2.1.5-680-1.precise        Open source backup tool for InnoDB and XtraDB
        ii  percona-xtradb-cluster-client-5.5  5.5.31-23.7.5-438.precise  Percona Server database client binaries
        ii  percona-xtradb-cluster-common-5.5  5.5.33-23.7.6-496.precise  Percona Server database common files (e.g. /etc/mysql/my.cnf)
        ii  percona-xtradb-cluster-galera-2.x  157.precise                Galera components of Percona XtraDB Cluster
        ii  percona-xtradb-cluster-server-5.5  5.5.31-23.7.5-438.precise  Percona Server database server binaries
        ii  php5-mysql                         5.3.10-1ubuntu3.8          MySQL module for php5

    During the install of the server, MariaDB and Galera Cluster were installed first, then removed and replaced by Percona XtraDB Cluster, so I think this is the source of the problem. But how can I resolve it without reinstalling everything?

    UPDATE 1

        # apt-cache policy libmariadbclient18
        libmariadbclient18:
          Installed: (none)
          Candidate: (none)
          Version table:
             5.5.32+maria-1~precise 0
                100 /var/lib/dpkg/status
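
    A hedged sketch of one way this kind of leftover is commonly cleared (not a verified fix for this particular cluster): the iU state shows a half-installed MariaDB build of libmysqlclient18 that still wants libmariadbclient18, so removing the MariaDB remnants and letting apt re-resolve against the Percona repository often clears the unmet dependency. Removing libmysqlclient18 also removes packages that depend on it (php5-mysql here), so plan to reinstall them afterwards.

        # drop the half-installed MariaDB client library and its leftovers
        sudo apt-get remove libmysqlclient18 mariadb-common

        # let apt re-resolve against the repositories that are actually configured
        sudo apt-get update
        sudo apt-get install -f
        sudo apt-get dist-upgrade

        # reinstall anything that was removed along with the library
        sudo apt-get install php5-mysql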

    Read the article

  • File Sync Solution for Batch Processing (ETL)

    - by KenFar
    I'm looking for a slightly different kind of sync utility: not one designed to keep two directories identical, but rather one intended to keep files flowing from one host to another. The context is a data warehouse that currently has a custom-developed solution that moves 10,000 files a day, some of which are 1+ GB gzipped files, between Linux servers via SSH. Files are produced by the extract process, then moved to the transform server, where a transform daemon is waiting to pick them up. The same process happens between transform and load. Once the files are moved, they are typically archived on the source for a week, and the downstream process likewise moves them to temp and then archive as it consumes them. So, my requirements and desires:

    - It is never used to refresh updated files; it is only used to deliver new files.
    - Because it's delivering files to downstream processes, it needs to rename the file once done so that a partial file doesn't get picked up (one common way to do this is sketched after this list).
    - In order to simplify recovery, it should keep a copy of the source files, but rename them or move them to another directory.
    - If the transfer fails (network down, file system full, permissions, file locked, etc.), it should retry periodically, and it should never fail in a non-recoverable way or in a way that sends the file twice or never sends it.
    - It should be able to copy files to two or more destinations.
    - It should have a consolidated log so that it's easy to find problems.
    - It should have an optional checksum feature.

    Any recommendations? Can Unison do this well?
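
    A hedged sketch of the rename-on-completion pattern with plain rsync and ssh (hosts, directories and the cron-driven loop are placeholders, not a product recommendation): each file lands in a staging directory the consumer never reads and is only moved into the pickup directory after the transfer exits cleanly, while the source copy is archived rather than deleted.

        #!/bin/sh
        # Deliver new extract files to the transform host; a cron entry re-runs this
        # periodically, so a file that fails is simply retried on the next pass.

        for f in /data/outbound/*.gz; do
            [ -e "$f" ] || continue                  # nothing pending
            name=$(basename "$f")

            # 1. copy into a staging dir the downstream daemon never reads
            rsync -a "$f" transform-host:/data/incoming/staging/ &&
            # 2. publish it atomically for pickup (mv within the same filesystem)
            ssh transform-host "mv /data/incoming/staging/$name /data/incoming/ready/$name" &&
            # 3. keep the source copy for recovery instead of deleting it
            mv "$f" "/data/outbound/archive/$name"
        done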

    Read the article
