Search Results

Search found 22900 results on 916 pages for 'pascal case'.


  • How to set minimum SQL Server resource allocation for a database?

    - by Jeff Widmer
    Over the past Christmas holiday week, when the website I work on was experiencing very low traffic, we saw several Request timed out exceptions (one on each day 12/26, 12/28, 12/29, and 12/30) on several pages that require user authentication. We rarely saw Request timed out exceptions prior to this very low traffic week. We believe the timeouts were due to the database that it uses being "spun down" on the SQL Server and taking longer to spin up when a request came in. There are 2 databases on the SQL Server (SQL Server 2005), one which is specifically for this application and the other for the public facing website and for authentication; so in the case where users were not logged into the application (which definitely could have been for several hours at a time over Christmas week) the application database probably received no requests. We think at this point SQL Server reallocated resources to the other database and then when a request came in, extra time was needed to spin up the application database and the timeout occurred. Is there a way to tell SQL Server to give a minimum amount of resources to a database at all times?
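
    One possibility worth checking (an assumption, not something confirmed in the question) is that the application database has AUTO_CLOSE enabled, which shuts the database down when the last connection drops and makes the next request pay the start-up cost. A minimal sketch for checking and disabling it, where the database name AppDb is a placeholder:

        -- run from SSMS or sqlcmd; [AppDb] is a hypothetical name
        SELECT name, is_auto_close_on FROM sys.databases;
        ALTER DATABASE [AppDb] SET AUTO_CLOSE OFF;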

    Read the article

  • Windows service with a "Log on as" user

    - by David.Chu.ca
    I have a service installed on a Windows XP box and on a Windows Server 2003 box. The service is set to start automatically. If I set it to log on as a local user, will the service stop running when that user logs off Windows? In other words, do I have to keep that user logged in to keep the service running? I understand that the service can be set to run as Local System, which does not need anyone to log in as long as the box is powered on. However, the service I need to run has to run as a local user with the required permissions. The problem I have right now is that the box may reboot itself. If that happens, no one is logged in until someone logs in as the required user. I want to know whether the required user has to be logged on first. If that's the case, any suggestion for dealing with the unexpected reboot issue?
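
    For what it's worth, a service configured with a specific "Log on as" account keeps running after that user logs off; an interactive session is not required, and an automatic-start service also starts after a reboot with no one logged in. A minimal sketch of setting the service account from an elevated command prompt, where the service and account names are placeholders:

        rem "MyService" and ".\svcuser" are hypothetical names
        sc config "MyService" obj= ".\svcuser" password= "secret"
        sc config "MyService" start= auto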

    Read the article

  • Install PHP mcrypt on Red Hat 4

    - by Chris
    I'm having a very hard time getting mcrypt for PHP installed on a Red Hat 4 server. I've downloaded the rpm but it tells me: error: Failed dependencies: php-common(x86-32) = 5.4.7-2.fc18 is needed by php-mcrypt-5.4.7-2.fc18.i686 rpmlib(FileDigests) <= 4.6.0-1 is needed by php-mcrypt-5.4.7-2.fc18.i686 libc.so.6(GLIBC_2.4) is needed by php-mcrypt-5.4.7-2.fc18.i686 libltdl.so.7 is needed by php-mcrypt-5.4.7-2.fc18.i686 rtld(GNU_HASH) is needed by php-mcrypt-5.4.7-2.fc18.i686 rpmlib(PayloadIsXz) <= 5.2-1 is needed by php-mcrypt-5.4.7-2.fc18.i686 So when I try to install one of those packages, they also require another 8 packages. So I'm diving into dependency hell here. Now if I try to compile mcrypt from source, this is what I get: checking for libmcrypt - version >= 2.5.0... no *** Could not run libmcrypt test program, checking why... *** The test program failed to compile or link. See the file config.log for the *** exact error that occured. This usually means LIBMCRYPT was incorrectly installed *** or that you have moved LIBMCRYPT since it was installed. In the latter case, you *** may want to edit the libmcrypt-config script: no configure: error: *** libmcrypt was not found But I was able to install libmcrypt from an rpm package successfully. Any suggestions? Also, I cannot use up2date as it requires an active paid account from Red Hat, and since the staff where I work has changed rather rapidly in the last year, no one knows whether there even were any support accounts.
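
    A couple of quick checks that may narrow down the source-build failure, assuming the rpm installed only the runtime library and not the headers: configure's test program needs the development files and the libmcrypt-config script on the build host.

        rpm -qa | grep -i mcrypt        # is a libmcrypt-devel package installed as well?
        which libmcrypt-config          # configure's test program expects this on the PATH
        grep -i mcrypt config.log       # the exact compile/link error is recorded here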

    Read the article

  • SELinux blocking Samba directory listing

    - by Sean M
    I am running Samba on a CentOS server, and I am experiencing a problem where it allows me to connect to the server and see a share, but shows the share as an empty directory. I find this behavior strange. Here is the stanza in my smb.conf for the given share: [seanm] path = /home/seanm writeable = yes valid users = seanm, root read only = No Here's what I see on the server side: [seanm@server ~]$ ls -l -rw-r--r-- 1 seanm seanm 40 Jan 4 13:45 pangram.txt And yet: [seanm@client ~]$ smbclient //server/seanm -U seanm -W WORKGROUP Enter seanm's password: Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.0.33-3.29.el5_5.1] smb: \> ls . D 0 Fri Jan 7 10:08:55 2011 .. D 0 Fri Jan 7 07:58:31 2011 58994 blocks of size 262144. 50356 blocks available This behavior is present on both a Windows client and a Linux client system. The behavior is present with the firewall on and with the firewall off, so it's not that. Neither /var/log/messages nor /var/log/secure have any complaints about Samba. I doubt that SELinux is a problem: just in case, here are the relevant settings. [root@server ~]# getsebool -a | grep samba samba_domain_controller --> off samba_enable_home_dirs --> on samba_export_all_ro --> off samba_export_all_rw --> off samba_share_fusefs --> off samba_share_nfs --> off use_samba_home_dirs --> on virt_use_samba --> off What am I doing wrong here, and what can I do to fix it? Edit: SELinux probably is the problem, judging by the fact that the issue goes away when I set SELinux to "permissive" or issue setsebool -P samba_export_all_rw on - both of which are unacceptable for production environments. What the heck kind of context does a directory need to have on it for Samba users to actually get files from it? I consider rolling your own rules and/or context to be deeply sub-optimal.
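
    If SELinux does turn out to be the culprit, the usual label for directories exported over Samba is samba_share_t. A hedged sketch that persists the context instead of rolling custom policy (semanage comes from the policycoreutils-python package on CentOS):

        semanage fcontext -a -t samba_share_t "/home/seanm(/.*)?"
        restorecon -Rv /home/seanm
        ls -Zd /home/seanm    # verify the directory now carries samba_share_t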

    Read the article

  • Is there any free software to check for issues with a DVD drive?

    - by AgentConundrum
    I don't usually play movies on my laptop (I prefer the standalone with the TV), but I tried to watch one the other night, and noticed playback was really choppy and had audio artifacts in places. I thought it could be related to memory issues so I rebooted and tried again, but the results were the same. I considered that it could be an issue with the disc, so I tried to clean it but again there was no change. I don't think the problem is with the disc, because I tried another disc and it also had the same problem. I don't think I've ever watched the second disc since I've had it, so it should have been safe in its jewel case. Also, there were no issues when I watched an episode off the first disc in my standalone player. What I'm wondering is: are there any (free) utilities that can check for issues with the drive itself? I looked around but most of the software I found focuses on integrity checks for the disc, not the drive. I have had issues with this laptop recently (had to replace the keyboard when the cat damaged it while I was cleaning dust out and the machine was ripped apart, also replaced part of the chassis after I cracked it when I tried to open it not knowing a screw was still in it) so I may have just replaced the drive incorrectly. I'm going to check on this while I await an answer to this question. Thanks.

    Read the article

  • placing shell script under systemd control

    - by Calvin Cheng
    Assuming I have a shell script like this:- #!/bin/sh # cherrypy_server.sh PROCESSES=10 THREADS=1 # threads per process BASE_PORT=3035 # the first port used # you need to make the PIDFILE dir and insure it has the right permissions PIDFILE="/var/run/cherrypy/myproject.pid" WORKDIR=`dirname "$0"` cd "$WORKDIR" cp_start_proc() { N=$1 P=$(( $BASE_PORT + $N - 1 )) ./manage.py runcpserver daemonize=1 port=$P pidfile="$PIDFILE-$N" threads=$THREADS request_queue_size=0 verbose=0 } cp_start() { for N in `seq 1 $PROCESSES`; do cp_start_proc $N done } cp_stop_proc() { N=$1 #[ -f "$PIDFILE-$N" ] && kill `cat "$PIDFILE-$N"` [ -f "$PIDFILE-$N" ] && ./manage.py runcpserver pidfile="$PIDFILE-$N" stop rm -f "$PIDFILE-$N" } cp_stop() { for N in `seq 1 $PROCESSES`; do cp_stop_proc $N done } cp_restart_proc() { N=$1 cp_stop_proc $N #sleep 1 cp_start_proc $N } cp_restart() { for N in `seq 1 $PROCESSES`; do cp_restart_proc $N done } case "$1" in "start") cp_start ;; "stop") cp_stop ;; "restart") cp_restart ;; *) "$@" ;; esac From the bash script, we can essentially do 3 things: start the cherrypy server by calling ./cherrypy_server.sh start stop the cherrypy server by calling ./cherrypy_server.sh stop restart the cherrypy server by calling ./cherrypy_server.sh restart How would I place this shell script under systemd's control as a cherrypy.service file (with the obvious goal of having systemd start up the cherrypy server when a machine has been rebooted)? Reference systemd service file example here - https://wiki.archlinux.org/index.php/Systemd#Using_service_file
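
    A minimal sketch of a unit that wraps the existing script, assuming it lives at /opt/myproject/cherrypy_server.sh; because the script daemonizes the workers itself and leaves several pid files behind, the simplest wrapper treats start/stop as one-shot actions rather than tracking a single main PID:

        # /etc/systemd/system/cherrypy.service  (path and description are assumptions)
        [Unit]
        Description=CherryPy server pool
        After=network.target

        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/opt/myproject/cherrypy_server.sh start
        ExecStop=/opt/myproject/cherrypy_server.sh stop
        ExecReload=/opt/myproject/cherrypy_server.sh restart

        [Install]
        WantedBy=multi-user.target

    After placing the file, systemctl daemon-reload, systemctl enable cherrypy.service and systemctl start cherrypy.service should bring the script under systemd's control, including starting it at boot.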

    Read the article

  • How can I know if a replacement display is compatible with my laptop?

    - by moraleida
    I have an Acer Aspire 4810TZ from 2010 which I truly love - it's a 3 year old notebook that still carries on for about 5 to 6h straight off the battery. I've also invested in it with an SSD and extra RAM and it's fine for my use right now. I'd really rather not just dump it. Now... its screen is dead and I'm looking for replacement parts, but it turns out to be a kind of expensive display - AUO B140XW02 14" LED. US$ 125 was the cheapest I found here (Rio de Janeiro, Brazil). The official dealer sells them for US$ 175. I'm aware of How do I know if a LCD is compatible with my Laptop? and Can I replace a laptop screen with one of a different resolution? but the answers don't really help me in this case. Also Replacing an LCD screen in a laptop gives me a hint but I'm completely dumb at hardware so this is why i'm asking: Is it possible to buy an LCD or a lower grade LED screen with the same connectors? What characteristics should I look for to find out if a certain display is compatible?

    Read the article

  • cannot resolve DNS server's own domain name

    - by sims
    I have a DNS server (mega.dude - 123.123.123.123) running bind 9.4. When I: dig mega.dude I get no answer section. I have nameserver 123.123.123.123 in /etc/resolv.conf Here is my zone file: $TTL 1W @ IN SOA mega.dude. names.mega.dude. ( 2009081502 ; serial 3H ; refresh 15M ; retry 1W ; expiry 1D ) ; minimum NS ns1 NS ns2 MX 10 mail.mega.dude. A 123.123.123.123 @ A 123.123.123.123 ns1 A 123.123.123.123 ns2 A 123.123.123.123 www CNAME @ mail A 123.123.123.123 It didn't use to look like this. I read that it's evil to have an mx record pointing to a CNAME. So I changed that. Then I thought maybe that was also the case for NS. So I changed those too. Still no good. The ports are open. I can't figure it out. Oh by the way, all the other zones return fine. But not the server's own domain. So I know I'm doing something stupid. Thanks for your help all!
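
    Before chasing the records themselves, it may be worth confirming that bind actually loaded the zone after the edits; a syntax error, or a serial that was never bumped, leaves the old or an empty zone being served. A short sketch, where the config and zone file paths are guesses:

        named-checkconf /etc/named.conf
        named-checkzone mega.dude /var/named/mega.dude.zone   # path is an assumption
        rndc reload mega.dude
        dig @127.0.0.1 mega.dude A +norec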

    Read the article

  • Cannot access firewalled jboss server from Internet Explorer

    - by Simon Gibbs
    I've produced a website for a client, One Single Menu, using JBoss and hosted it on Rackspace Cloud Servers running Ubuntu's Maverick Meerkat. Following advice, I established some iptables rules to protect jboss: iptables -I INPUT 1 -i lo -j ACCEPT iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT iptables -A INPUT -p tcp --dport 22 -j ACCEPT iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080 iptables -I INPUT -p tcp --dport 8080 -j ACCEPT iptables -t nat -A OUTPUT -o lo -p tcp --dport 80 -j REDIRECT --to-port 8080 iptables -A INPUT -j DROP Now, several versions of IE on several computers on at least two different ISPs cannot access onesinglemenu.com. Curl from within the datacenter, Firefox, and Safari on the same ISPs can all access the server fine. I even tried IE and Firefox on the same computer and IE failed but Firefox worked. The error behaviour is that IE hangs on connecting without reporting an error, even after a minute or so. No page is displayed at all. I find it quite odd that I'm having a browser-specific connection issue, but it appears to be the case. Help!
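
    Given that curl, Firefox and Safari get through while IE hangs, a packet capture on the server would at least show whether IE's connection attempt arrives and where it stalls (before or after the REDIRECT to 8080). A minimal sketch, where eth0 and the client address 203.0.113.5 are placeholders:

        tcpdump -n -i eth0 "(tcp port 80 or tcp port 8080) and host 203.0.113.5"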

    Read the article

  • 5GHz vs 2.4GHz dual band router, max mbps

    - by Tallboy
    I've done a fair amount of reading before posting this but there are a few things still unclear. I just bought a Netgear WNDR3700 N600. I understand that 5GHz offers more channels, less interference because of more available channels, won't interfere with a microwave and so on, and also has a shorter range. Currently, my router is broadcasting both signals (my iPhone on 2.4, my computer on 5). But my question is: what is the max speed of 5GHz in Mbps? In the router settings, it allows me to set '300mbps', but I keep reading online that the max is 54. Is this true? I noticed when I set up the router the default for 2.4 was set to 300 and the default for 5 was set to 54, so I changed both to 300. Is this fine as well? I don't see why it wouldn't be maxed out for both by default. On the box it says a max rate of 300+300, so I assume this is correct but that it's throttled down by default so the router isn't stressed in case you have something streaming media 24/7 and slowing the internet down. What is the max range of 5GHz? My apartment is 780 square feet, and the router is in the main living room.

    Read the article

  • Can I configure Thunderbird 3 to refresh the folder list for an Exchange IMAP account?

    - by Howiecamp
    Background: When used as an IMAP client against Gmail, Thunderbird 3 (may be the case in v2 also, not sure) will refresh its list of folders (the folders correspond to Gmail labels) when you do "Download/Sync Now..." or restart the Thunderbird client. Any new folders (labels) created in Gmail will sync to the client, and any folders moved/changed/deleted in Gmail will move/change/delete on the client as well. (Note: Thunderbird has the concept of "subscribing" to IMAP folders (presumably allowing you to determine which folders you want, rather than bringing all of them down and dragging loads of data across the wire). When used against Gmail, Thunderbird appears to automatically subscribe to all folders (including when folders are newly created in Gmail), so this might be why the refresh is happening properly.) This behavior is what I want with Exchange. When using Thunderbird with Exchange (2007), the folder list doesn't refresh when folders are added/changed/deleted on the server and/or from a different mail client. When I look at the subscription options, some are checked and some are not (not sure why Thunderbird picked some and not others). And when I add new folders on the server and/or from another client, they never even appear in Thunderbird's list of folders, preventing me from subscribing to them.

    Read the article

  • dhclient requests filling memory?

    - by shanethehat
    Dammit Jim, I'm a web developer, not a sys admin. With that out of the way, my client has a CentOS server (6.2) that is only serving a single Magento site (and the associated MySQL server) and it is frequently running out of memory, despite the site only currently being open to 5 users. I'm investigating the logs to try to figure out why the memory usage is so high, but I don't really know what I'm looking at. It seems that there are a lot of entries in /var/log/messages concerning DHCP requests, approximately one every 15 seconds, that look like this: Apr 7 14:23:06 s15940039 dhclient[815]: DHCPREQUEST on eth0 to 172.30.102.85 port 67 (xid=0x6b5cd2a7) Is this normal? I don't see anything else in here that I don't recognise, but then I'm not sure I'd know the problem if I did see it. Four days ago the server ran out of memory completely and locked up, requiring a restart. The DHCP messages did not start up again for 23 hours, but then carried on as before. I have read this question which describes the same issue, but in my case a fresh DHCP lease does not ever seem to be issued. Is this something I should push back to the hosting provider, or have I not yet found the source of the memory problem?
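
    The DHCPREQUEST lines are more likely a symptom of a short lease or a missing DHCPACK than the cause of the memory pressure; on a box running Magento and MySQL, those two are the usual suspects for RAM. A quick, generic way to see what is actually holding the memory (not specific to this host):

        ps aux --sort=-rss | head -n 15   # largest resident processes first
        free -m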

    Read the article

  • Home server hard drive: 186k start-stop cycles in 325 days?

    - by j-g-faustus
    I set up a home server about a year ago, using Ubuntu server (10.04 LTS at the moment), four disks in RAID 5 for storage (WD Green 1.5 TB) and a laptop drive for the OS. Today the output of smartctl, a command line utility for checking the SMART attributes of a hard drive, tells me that the primary OS drive has had no less than 186,000 start-stop cycles in 325 days and may be nearing the end of its lifespan. The smartctl output is in "normalized values", in this case a number between 200 and 000, where 200 is "brand new" and 000 means "worn out". My disk gets 001. So I wonder what happened: 186k start/stop cycles in 7820 hours is about one start/stop per 2.5 minutes around the clock. This seems somewhat excessive for a computer that sees actual use once or twice per day. (The RAID disks are normal, averaging to one start/stop per day, as expected.) Does anyone have similar experiences, or pointers to what might be the issue here? Specifically I'd like to know Why the massive start/stop count? Do I have some sort of configuration issue? Could there be a background service that is causing trouble? Could having a laptop disk as the OS drive be part of the problem? Can anyone confirm or deny this? Here is the /etc/hdparm.conf configuration /dev/sda { apm = 127 spindown_time = 120 } and the most relevant parts of smartctl --attributes /dev/sda: smartctl version 5.38 [x86_64-unknown-linux-gnu] Copyright (C) 2002-8 Bruce Allen === START OF READ SMART DATA SECTION === SMART Attributes Data Structure revision number: 16 Vendor Specific SMART Attributes with Thresholds: ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE 1 Raw_Read_Error_Rate 0x002f 200 200 051 Pre-fail Always - 0 4 Start_Stop_Count 0x0032 001 001 000 Old_age Always - 185875 9 Power_On_Hours 0x0032 090 090 000 Old_age Always - 7820 12 Power_Cycle_Count 0x0032 100 100 000 Old_age Always - 109 193 Load_Cycle_Count 0x0032 118 118 000 Old_age Always - 246833 194 Temperature_Celsius 0x0022 107 098 000 Old_age Always - 36 As I generally prefer my drives to last more than a year, any advice is appreciated.
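
    One thing that might be worth ruling out is the drive's own power management: with hdparm.conf setting apm = 127 and spindown_time = 120, the disk is allowed to park heads and spin down on its own, and frequent small writes to the OS drive (logs, etc.) would then produce exactly this spin-up/spin-down churn. A hedged experiment is to relax those settings temporarily and watch whether the counters stop climbing:

        hdparm -B 254 /dev/sda   # least aggressive APM short of disabling it
        hdparm -S 0 /dev/sda     # disable the standby (spindown) timeout
        smartctl --attributes /dev/sda | egrep 'Start_Stop|Load_Cycle'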

    Read the article

  • How to convert a raw disk image to a copy-on-write image based on another image for use with kvm and

    - by Jean-Paul Calderone
    I have a virtual Windows machine running on kvm. Presently it has a 90GB raw disk image. I would like to clone this VM without having to keep two copies of the 90GB raw disk image around. It seems like a good approach for doing this is to make two new qcow or qcow2 images based on the original. First I converted the raw image to a qcow2 image: qemu-img convert -O qcow2 basewindowsxp.img basewindowsxp.qcow2 Then I tried creating a new image backed by this: qemu-img create -F qcow2 -f qcow2 -b `pwd`/basewindowsxp.qcow2 windowsxp-1.qcow2 Then I used virt-manager to point the original VM at windowsxp-1.qcow2. However, when I try to start up the VM in this new configuration, virt-manager reports an error: Traceback (most recent call last): File "/usr/share/virt-manager/virtManager/engine.py", line 588, in run_domain vm.startup() File "/usr/share/virt-manager/virtManager/domain.py", line 150, in startup self._backend.create() File "/usr/lib/python2.6/dist-packages/libvirt.py", line 300, in create if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self) libvirtError: internal error unable to start guest: qemu: could not open disk image /var/lib/libvirt/images/windowsxp-1.qcow2 The error suggests that the filename was misspecified or that the filesystem permissions are too restrictive, but neither of these is the case: $ ls -l /var/lib/libvirt/images/windowsxp-1.qcow2 -rwxrwxrwx 1 root root 262144 2010-05-27 08:32 /var/lib/libvirt/images/windowsxp-1.qcow2 Why won't virt-manager start this vm?
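
    One thing to verify is that the overlay really points at a backing file qemu can open from its own context; "could not open disk image" is also what you get when the backing file path is wrong or unreadable (on Ubuntu, the libvirt AppArmor profile can block backing files that live outside the usual image directories). A quick check, using the paths from the question, with the second path being whatever the first command reports:

        qemu-img info /var/lib/libvirt/images/windowsxp-1.qcow2   # shows the recorded backing file path
        ls -l /path/reported/as/backing/file                      # placeholder: confirm it exists and is readable by the qemu process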

    Read the article

  • Script launching 3 copies of rsync

    - by organicveggie
    I have a simple script that uses rsync to copy a Postgres database to a backup location for use with Point In Time Recovery. The script is run every 2 hours via a cron job for the postgres user. For some strange reason, I can see three copies of rsync running in the process list. Any ideas why this might be the case? Here's the cron entry: # crontab -u postgres -l PATH=/bin:/usr/bin:/usr/local/bin 0 */2 * * * /var/lib/pgsql/9.0/pitr_backup.sh And here's the ps list, which shows two copies of rsync running and one sleeping: # ps ax |grep rsync 9102 ? R 2:06 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log 9103 ? S 0:00 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log 9104 ? R 2:51 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log And here's the uber simple script that seems to be the cause of the problem: #!/bin/sh LOG="/var/log/pgsql-pitr-backup.log" base_backup_dir="/var/lib/pgsql/9.0/backups" wal_archive_dir="$base_backup_dir/wal_archives" pitr_archive_dir="$base_backup_dir/pitr_archives" timestamp=`date +%Y%m%d%H%M%S` backup_dir="$pitr_archive_dir/$timestamp" mkdir -p $backup_dir echo `date` >> $LOG /usr/bin/psql -U postgres -c "SELECT pg_start_backup('$backup_dir');" rsync -avW /var/lib/pgsql/9.0/data/ $backup_dir/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log /usr/bin/psql -U postgres -c "SELECT pg_stop_backup();"
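
    For what it's worth, seeing three rsync processes for one local copy is normal: rsync splits itself into a sender, a receiver and a generator even when both sides are local directories, so the listing does not necessarily mean the cron job fired more than once. A quick way to confirm the three PIDs belong to a single invocation:

        pstree -p $(pgrep -o rsync)   # oldest rsync PID and its children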

    Read the article

  • Authenticate VNC session with ConsolKit?

    - by lori
    I have a linux machine running Fedora 16 in a cupboard. It has no screen or keyboard. I connect to it using a combination of vnc and ssh. Recently, after an update, I have had issues with authentication on the machine. If I vnc to it, the kde desktop pops up an error dialog every few minutes saying Authorization failed. Failed to obtain authentication. If I plug in a USB drive it fails to mount, Dolphin reports an authentication issue again. I have had limited success finding the solution. AFAICT, it is an issue with ConsoleKit deeming me to be a non-local user so it prevents authentication. This is the output from ck-list-sessions: $ ck-list-sessions Session5: unix-user = '1000' realname = 'steve' seat = 'Seat6' session-type = '' active = FALSE x11-display = ':1' x11-display-device = '' display-device = '' remote-host-name = '' is-local = FALSE on-since = '2012-09-16T08:07:03.137011Z' login-session-id = '1' I have tried to update my .vnc/xstartup script to include ck-launch-session as follows: $ cat ~/.vnc/xstartup #!/bin/sh exec ck-launch-session vncconfig -iconic & unset SESSION_MANAGER unset DBUS_SESSION_BUS_ADDRESS export XKL_XMODMAP_DISABLE=1 OS=`uname -s` if [ $OS = 'Linux' ]; then case "$WINDOWMANAGER" in *gnome*) if [ -e /etc/SuSE-release ]; then PATH=$PATH:/opt/gnome/bin export PATH fi ;; esac fi if [ -x /etc/X11/xinit/xinitrc ]; then exec ck-launch-session /etc/X11/xinit/xinitrc fi if [ -f /etc/X11/xinit/xinitrc ]; then exec ck-launch-session sh /etc/X11/xinit/xinitrc fi [ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources exec ck-launch-session xsetroot -solid grey exec ck-launch-session xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" & exec ck-launch-session twm & This has not helped. How can I either authenticate myself to ConsoleKit, or trick it into believing I am a local user?
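
    One thing that stands out in the xstartup above is that each exec replaces the running shell, so only the first exec line ever runs and the later ck-launch-session calls are never reached; wrapping the whole session once is usually enough. A minimal sketch of that shape (whether ConsoleKit will then treat the VNC session as local is a separate question, so this is only a starting point):

        #!/bin/sh
        # minimal ~/.vnc/xstartup sketch: one ConsoleKit session wrapping the whole desktop
        unset SESSION_MANAGER
        unset DBUS_SESSION_BUS_ADDRESS
        [ -r "$HOME/.Xresources" ] && xrdb "$HOME/.Xresources"
        exec ck-launch-session dbus-launch --exit-with-session startkde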

    Read the article

  • IIS6: How to troubleshoot a 404 error in an ASP.NET application?

    - by Tomalak
    I have an ASP.NET application on a Windows Server 2003/IIS6 that refuses to run for some reason (it's the Xerox Centre, if that info helps). It had been working flawlessly on this server before, though. Now, all I get if I try to open the app homepage (http://some.intranet.server/XeroxCentreWareWeb/) is a "404 - File or directory not found" error. The app is configured to run in its own app pool, which runs as Network Service. The Network Service account has read access to the configured directory. If I stop the app pool, I get the expected "Service Unavailable" message, meaning the app and its pool are wired correctly. I tried to track down any file permission issues with procmon - nothing to be seen. There isn't even an access to the web app directory happening when the page loads. Interestingly, according to procmon, the web server accesses the 401-2 custom error file (Logon failed due to server configuration) first, but then decides to send the 404 down to the client. EDIT: The app runs with Windows-integrated authentication. Regular users have access to the app directory as well (I would have noticed file system "ACCESS DENIED" messages in procmon, if there had been any.) This makes me think that there is some kind of weird permission problem that occurs even before the application files are being accessed. I just have no idea where to look. I've tried to run the app pool as Local System for a test, but to no avail. What else could I check in this case?
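
    One IIS6-specific cause of a 404 on an otherwise healthy ASP.NET app is the ASP.NET web service extension being set to Prohibited (IIS6 answers with a 404, not a 500, in that case), or the site's script maps pointing at the wrong runtime version. A couple of hedged checks from the command line (script locations and the framework version are assumptions):

        cscript %SystemRoot%\system32\iisext.vbs /ListFile
        %WINDIR%\Microsoft.NET\Framework\v2.0.50727\aspnet_regiis.exe -lv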

    Read the article

  • Trying to delete directory with "rm -rf", but get message that it's not empty

    - by Ben Hocking
    I've tried deleting a directory using "rm -rf" and I'm getting the message "Directory not empty": Bens-MacBook-Pro:please benjaminhocking$ ls -lart empty_directory/ total 16 drwxr-xr-x 5 benjaminhocking staff 170 Aug 27 14:46 . drwxr-xr-x 3 benjaminhocking staff 102 Aug 27 15:28 .. Bens-MacBook-Pro:please benjaminhocking$ rm -rf empty_directory/ rm: empty_directory/: Directory not empty Bens-MacBook-Pro:please benjaminhocking$ rmdir empty_directory/ rmdir: empty_directory/: Directory not empty If I try the same thing using Finder (dragging the folder to the Trash), I get the message The operation can’t be completed because the item “empty_directory” is in use. I've tried doing xattr -d com.apple.quarantine, purely out of superstition, but it did no good. A probably important piece of context is that this directory was initially in a directory that should've been deleted by a "make clean" command I issued prior to Terminal locking up on me, after which a little over half of the other programs I had running also locked up, including Skype, and eventually the OS itself. I ended up having to reboot the computer by pressing and holding the power key. Edit to add: Another important piece of information I left off was that this was happening in an encrypted folder à la encfs. I was able to track down the corresponding folder in the encrypted side of things and delete it there. I still don't know why I couldn't do it from the decrypted side of things like I normally do. I'll leave this unanswered for now in case anyone has a good answer for that.
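
    For the "in use" part of the puzzle, it can help to ask the OS which process is holding the directory open before resorting to a reboot; on OS X, lsof works for this. A small sketch, run from the parent directory:

        lsof +D empty_directory    # list processes with files open under the directory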

    Read the article

  • Enabling hardware acceleration and Xinerama for multi-monitor/multi-GPU in Linux

    - by mynameiscoffey
    My current setup is three monitors connected as follows (monitors listed from left to right): GPU0 (nVidia GTX 280): - Dell 2405FPW (1920x1200) - Dell U2410 (1920x1200) GPU1 (nVidia 210): - Dell 2405FPW (1920x1200) Works like a charm in Windows 7, not so much in Linux. I seem to only have three real options: Run all three monitors as a seperate X screen, I get hardware acceleration but as they are all independent X sessions I cannot move windows between them and can only have firefox open on one at any given time. Run the two on GPU0 in TwinView mode and have GPU1 as a seperate X screen. Same limitation as 1 but at least two monitors work together ok. I did have an issue where occasionally Linux saw both monitors on GPU0 as a single large monitor however. Enable Xinerama and have everything work as I want it to but hardware acceleration is gone and the display is Windows 95 style choppy. My ideal solution would be to have all screens working as they do under Xinerama without the limitation of having hardware acceleration disabled. I don't even care if that means rendering all three on GPU0 and somehow farming out the display of the third monitor to GPU1, whatever works. My question is this: is there any way to accomplish this? I don't feel like my use case is so out there that there shouldn't be at least some form of support (beyond the three limited options presented above), or is my best option going to be to just suck it up and pick up a better card to replace both that can handle three outputs by itself?

    Read the article

  • Managing multiple IMAP accounts in Thunderbird

    - by baritoneuk
    I've been using Thunderbird for years without issues with 20+ POP3 accounts. I'm moving over to IMAP, which will enable me to keep copies of the emails locally and on the server whilst keeping everything synchronised. However I'm looking for the best way to manage multiple IMAP accounts in Thunderbird. Currently I have a filter that copies all the emails into a central inbox and into separate local folders. The reason for this is I go through my inbox daily and delete all emails that don't require any action. I move any emails that require action to my "action" IMAP account folder. This way I can synchronise all the emails that require action across multiple computers (and mobile devices). This technique is my implementation of the GTD or Getting Things Done philosophy. I also copy over each email into separate local folders. The reason I do this is just in case any emails on the IMAP accounts get deleted, or something drastic happens on the server which means I lose all the emails. My business partner has access to some of these emails and still uses POP3 (with "leave copy on server" checked), but I know Thunderbird can sometimes still delete emails off the server. The problem with the above is that Thunderbird gives me the dreaded error dialogue saying that the emails cannot be filtered due to another process. I find the folder list in Thunderbird hard to manage. Here is a screenshot of part of my folder list - as you can see it's a bit of a complicated list and not easy to manage: What would be the best way of me managing multiple IMAP accounts whilst allowing me to have copies put in a central folder and emails in local folders? It would be useful to know if people think this is necessary, as perhaps there is a better way? How do people manage multiple IMAP accounts in a way that allows them to keep on top of actionable emails? I'd be interested in how others manage this. I've never used the Thunderbird-based client "Postbox"; does it handle multiple IMAP accounts better?

    Read the article

  • Directory directive: AuthType None but still need an AuthProvider?

    - by Steffen Winkler
    For now I just need the server to let me download files from one specific folder (in my case I chose /opt/myFolder for that task). Distribution is Debian 6.0. *edit_start* Apache version is 2.4; according to the official documentation, the Order/Allow clauses are deprecated and should not be used anymore. I'm an idiot: Apache version is 2.2. *edit_end* My directory directives in apache2.conf look like this: <IfModule dir_module> DirectoryIndex index.html index.htm index.php </IfModule> ServerRoot "/etc/apache2" DocumentRoot "/opt/myFolder" <Directory /> Options FollowSymLinks AuthType None AllowOverride None Require all denie </Directory> <Directory "/opt/myFolder/*"> Options FollowSymLinks MultiViews AllowOverride None AuthType None Require all allow </Directory> When I try to access a file inside that folder (http://myserver.de/aTestFile.zip) I get an Internal Server Error. Also Apache writes the following error into its log: configuration error: couldn't check user. Check your authn provider!: /aTestFile.zip Why would I need an authn provider if I don't want any authentication? Also I hope someone can explain to me what kind of AuthenticationProvider I'd need for that. Every time I search for those things I get pointed at people asking how to protect files/directories with passwords or restrict access to some IP addresses, which doesn't really help me. OK, since I have Apache version 2.2, here is the error I get when using the Order/Deny/Allow commands instead of AuthType/Require: Invalid command 'Order', perhaps misspelled or defined by a module not included in the server configuration.
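
    Since this is Apache 2.2, the "Require all ..." form (an Apache 2.4 feature) is what drags the authn machinery in and produces the "Check your authn provider!" error; on 2.2, anonymous access control is done with Order/Allow/Deny from mod_authz_host, and if Order is reported as an invalid command that module has to be (re-)enabled first (on Debian, something like a2enmod authz_host, which is an assumption about the local setup). A hedged sketch of the 2.2-style directives:

        <Directory />
            Options FollowSymLinks
            AllowOverride None
            Order deny,allow
            Deny from all
        </Directory>

        <Directory "/opt/myFolder">
            Options FollowSymLinks MultiViews
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>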

    Read the article

  • Building a PC for Work and play? [closed]

    - by Derek Organ
    OK, it's been a long time since I built my own PC, so I'm looking to get back into it again and build a new one. First off, the budget is about €800, excluding the monitor, Windows 7 licence, and mouse (just bought a new G500). I plan on using my computer for work, lots of applications open at once but none particularly excessive (Photoshop being the most demanding, mostly coding tools). I also use it for some gaming, e.g. COD, Starcraft, etc. One thing I do want to do eventually is get a really good monitor with high resolution, maybe 27", so the graphics card needs to be able to make the best use of that. So, a few questions: 1) Is the performance bottleneck mostly still the hard drives? 2) Aren't most processors (e.g. i5, even i3) so far ahead of other bottlenecks that it makes little difference the higher you go? Isn't the graphics card dealing with the heavy graphics work, so what really slows down because of a slow CPU? So from this my thinking is to get an SSD as my primary drive for the OS etc., have loads of memory (e.g. 6-8GB), and a decent mid-level graphics card. It doesn't seem worth spending much on the CPU or any other parts at my level, really. Basic parts off the top of my head: case, motherboard, CPU, SSD drive, SATA drive, power supply, memory, cooling (fan?), graphics card, network card, keyboard, DVD drive, mouse, Windows, monitor. Am I missing anything? Any helpful tips or general education much appreciated. Thanks, Derek

    Read the article

  • Unable to remove Read-Only attribute from folder in Windows XP

    - by elcuco
    I have this directory which I cannot remove the read-only attribute from. The computer is running XP SP2 (or SP3, not sure) and the directory sits on an NTFS file system. Looking around the web I found this: http://support.microsoft.com/kb/256614 which says that if the directory is "customized" it's treated as a system folder and thus "read only". I don't think this is the scenario in my case, but anyway it's not helping; their recommendation is more or less: attr -r -s /d /s d:\data and this is not working for me. Any other ideas? More info: The directory is served to an HTTP server (wamp) and the directory is an SVN checkout. What happens is that the web server cannot write files into the directory (imagecache from Drupal, if you are really interested). Edit 2: The original post claimed that the directory sits on a VFAT FS; however, I booted Fedora 11 from a live CD and the partition is marked as NTFS. Edit 3: I left the company where I worked, where this situation happened... so I cannot fully close this question. But things get even worse: I tested the "attr -r" answer I put, it did not work for me, and now the developer said that it worked for her. A nice WTF moment. Probably a reboot helped... Sorry for losing details. If anyone has the same problem, and one of the answers helps him - please comment.
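
    A side note on the quoted command: the Windows tool is attrib, not attr, and the /s /d switches go after the path. A minimal form, run from a command prompt:

        attrib -r -s "d:\data" /s /d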

    Read the article

  • no mails routed to/from new Exchange 2010

    - by Michael
    I have had an Exchange Server 2003 up and running for years. Now I am in the middle of a transition to Exchange Server 2010; I have already installed it, put the latest service pack on it, and everything seems fine, BUT: mail does not get delivered to mailboxes on the new Exchange 2010. E.g. when I create a new mailbox on the old server, emails in and out to/from it work like a charm. But as soon as I move it to the new server, emails get stuck. None are delivered from outside or from old mailboxes, and none are sent out from the new server to anywhere. Sending between mailboxes on the new server of course works. I can see the connectors between the old and new server in the Exchange 2003 Admin Tool, but I cannot find these anywhere on the new server. I have also set up send connectors on the new server to send out mail directly, but that does not work. In all other areas, the servers are working together perfectly - moving mailboxes between them, seeing each other, etc. - they "just" don't exchange (!) any emails. Any ideas what I missed? I also followed the hints from: Upgrading from Exchange 2003 to Exchange 2010, routing works in one direction only. There, emails were transported in at least one direction; in my case they are not transported at all. Both my connectors are up and valid and have the correct source/target shown by Get-RoutingGroupConnector | FL Kind regards Michael
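
    When messages sit between a 2003 and a 2010 server, the 2010 transport queues usually say why; a quick look from the Exchange Management Shell on the new server (a generic diagnostic, not specific to this setup, and the queue identity in the second line is a placeholder):

        Get-Queue | FL Identity,Status,MessageCount,NextHopDomain,LastError
        Get-Message -Queue "SERVER\QueueIdentityFromAbove" | FL Subject,Status,LastError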

    Read the article

  • Libvirt / QEmu Machine Fails and Refuses Restart Because of Memory Allocation Errors

    - by Elmar Weber
    I'm having a problem with libvirt. On a system restart all virtual machines (VMs) are started without a problem and keep running. Then at some point in time a set of machines shuts down according to their log. When I try to restart the machine, I'm getting an error that the memory allocation failed, although more than enough memory is free. server ~ # free total used free shared buffers cached Mem: 16176648 16025476 151172 0 285432 950300 -/+ buffers/cache: 14789744 1386904 Swap: 0 0 0 server ~ # virsh start zimbra error: Failed to start domain zimbra error: Unable to read from monitor: Connection reset by peer server ~ # tail -n 4 /var/log/libvirt/qemu/zimbra.log LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/bin:/usr/sbin:/sbin:/bin QEMU_AUDIO_DRV=none /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 3072 -smp 2,sockets=2,cores=1,threads=1 -name zimbra -uuid d05ddb7a-83c4-a77b-d8bc-a322648520cf -nodefconfig -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/zimbra.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -drive file=/var/lib/libvirt/images/zimbra.img,if=none,id=drive-ide0-0-0,format=raw -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1 -netdev tap,fd=19,id=hostnet0 -device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:21:a9:ad,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -usb -vnc 192.168.1.2:25 -k de -vga cirrus -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4 char device redirected to /dev/pts/2 Failed to allocate 3221225472 B: Cannot allocate memory 2012-07-06 08:42:56.076+0000: shutting down server ~ # uname -a Linux server 3.2.0-26-generic #41-Ubuntu SMP Thu Jun 14 17:49:24 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux The system is an Ubuntu 12.04 server. The problem seems to have started after the last restart, which was due to a number of package upgrades and a kernel upgrade. I tried booting with the previous kernel, but the problem persists. I was not able to pinpoint an exact event for when the machines fail; they do it at nearly the same time. The last time, a duplicity job was running, but this was not always the case. Any suggestions on how to debug this? Best regards, elm
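
    One detail worth a second look: in the free output above, the "-/+ buffers/cache" line shows only about 1.4 GB actually available and no swap configured, while the guest is started with -m 3072, so a 3 GB allocation genuinely cannot be satisfied at that moment; the interesting question becomes what grew to consume the rest. A generic way to see where the memory went before retrying the guest:

        free -m
        virsh list --all              # other guests that may be holding the remaining RAM
        grep -i commit /proc/meminfo  # overcommit/commit state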

    Read the article
