Search Results

Search found 23555 results on 943 pages for 'command timeout'.


  • They Wrote The Book On It

    - by steve.diamond
    First of all, an apology to you all for my not posting this yesterday, when I should have. Those of you who blog know the difference between "Save" and "Preview" - but I temporarily forgot it. Nevertheless, while I'm not impressed with this mishap, I'm blown away by the initiative three of my colleagues have taken. Jeff Saenger, Tim Koehler, and Louis Peters recently wrote a book, "Oracle CRM On Demand Deployment Guide." Not only that, they got this book PUBLISHED. These guys know their stuff: they have worked in the CRM industry for many years, and trust me, they command a lot of respect inside this organization. In the words of Louis Peters (who posted this yesterday on LinkedIn): "We've assembled all the best practices and lessons learned over the past six years working with CRM On Demand. The book covers a range of topics - working with SaaS-based applications, planning and executing a successful rollout, designing elegant and high-performing applications, and working effectively with Oracle. We even included several sample designs based on successful real-world deployments. Our main target audience is the CRM On Demand project team - sponsors, project managers, administrators, developers - really anyone planning, implementing or maintaining the application." Now these guys don't know it yet, but I'll be interviewing one of them and including audio excerpts of that conversation right here next Wednesday. In the meantime, if you want to learn more about successful CRM deployments in general, and working with Oracle CRM On Demand in particular, you should check out this book.

    Read the article

  • Path is too long

    - by kaleidoscope
    Bugged by the irritating "Path is too long after being fully qualified" error while running in the Development Fabric? The solution is pretty funny and, unfortunately, not so obvious. The culprit here is not your app but the Development Fabric. The DevFab accumulates a lot of temporary junk over its lifetime: local storage locations, cached binaries, configuration, diagnostics information, and cached compiled web site content files. These are typically stored at C:\Users\<username>\AppData\Local\dftmp. The Azure Tools will periodically clean this up, but sometimes you have to play janitor and take the law into your own hands ;). csrun.exe has quite a few tricks up its sleeve, one of which is the ability to clean out the Development Fabric's accumulated temporary junk. To do this, open the Azure command prompt with elevated privileges and run csrun.exe /devfabric:shutdown and then csrun.exe /devfabric:clean. If the problem still persists, the application directory structure could indeed be too long. A workaround is to point the Development Fabric's temporary directory at a shorter path. That path is controlled by the environment variable _CSRUN_STATE_DIRECTORY; setting its value to something like "C:\WA" or "C:\A" will shave some 25+ characters off your path. Do not forget to close Visual Studio and explicitly shut down the DevFab with csrun.exe /devfabric:shutdown (under elevated privileges, of course). Source: http://geekswithblogs.net/IUnknown/archive/2010/02/03/no-more-path-is-too-long.aspx - Sarang, K
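    A minimal sketch of that cleanup sequence, run from an elevated Windows Azure command prompt (the "C:\WA" value is just an example):

        REM shut down the Development Fabric, then purge its temporary state
        csrun.exe /devfabric:shutdown
        csrun.exe /devfabric:clean

        REM optional: relocate the DevFab state directory to a shorter path
        setx _CSRUN_STATE_DIRECTORY "C:\WA"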

    Read the article

  • nginx tmp file folder running out of disk space

    - by user1179459
    I get a MySQL disk space error:

        Can't create/write to file '/tmp/#sql_777_0.MYI' (Errcode: 28)

    mainly because my nginx server writes files into the tmp folder that never get cleaned up. I added this command to the crontab as per the instructions in the nginx manual, but it doesn't seem to do the trick (I don't understand what it does, either):

        0 */1 * * * /usr/sbin/tmpwatch -am 1 /tmp/nginx_client

    So I had to run these commands manually:

        cd /tmp/nginx_client
        find -name * | xargs rm

    What should I do to automate this cleanup? Is there a way to increase the size of /tmp (or /var/tmp) without reformatting or doing anything dangerous? And can I change the location of MySQL's temporary files?
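    On the last question: MySQL's temporary file location is set by the tmpdir option. A minimal sketch, assuming the directory exists and is writable by the mysql user (the path is an example):

        # /etc/my.cnf
        [mysqld]
        tmpdir = /var/lib/mysql-tmp

    Restart MySQL after the change so mysqld picks up the new directory.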

    Read the article

  • Remotely sync Time Machine drives

    - by Off Rhoden
    I have an Xserve that runs Time Machine to a local terabyte drive. I also connected my external terabyte drive for a time period and had Time Machine use it to establish the seed data. I plan to take my drive back home with me (out of state) and have the Xserve return to using its local drive for Time Machine. But when I get back home, is there a way to keep my external drive's copy of the Time Machine Backups folder in sync with the Backups folder back on the Xserve? I want a full copy of the history (it makes an awesome remote backup). I've thought of using the Unix command rsync; in fact, that's how I had been doing it, but I was hoping for the compactness that Time Machine is able to achieve. Thanks.
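    A rough sketch of the rsync approach mentioned in the question, assuming the Xserve is reachable over SSH (host and volume names are examples). Time Machine's history is built from hard links, so -H matters, and Apple's bundled rsync uses -E to copy extended attributes:

        # pull the latest Time Machine history from the Xserve
        rsync -aHE --delete \
            admin@xserve.example.com:/Volumes/TMDisk/Backups.backupdb/ \
            /Volumes/ExternalTB/Backups.backupdb/

    Expect it to be slow; walking millions of hard links is exactly the part rsync struggles with.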

    Read the article

  • ImageMagick failing to convert to JPG

    - by johnui
    Hi, we recently installed the latest version of ImageMagick onto our Linux server, and I seem to be having issues performing the most basic of tasks. I am running this command line:

        /usr/bin/convert /location/to/source/design.ai /location/to/save/output.jpg

    Unfortunately it saves output.jpg as an Illustrator file (if I rename the file to output.ai, it opens). Even if I do this:

        /usr/bin/convert /location/to/source/design.ai -rotate 90 /location/to/save/design.jpg

    it rotates the file and again saves it as an Illustrator document. This happens with all file types (png, bmp, etc.). It appears ImageMagick cannot figure out what I want it converted to and just saves to the same file type as the source. Any ideas on fixing this? Regards, John
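    Two things worth checking, as a sketch: force the output coder with an explicit format prefix so convert cannot fall back to the input format, and confirm that the PostScript delegate (Ghostscript), which renders .ai files, is installed:

        # the jpg: prefix forces the JPEG coder regardless of extension
        /usr/bin/convert /location/to/source/design.ai jpg:/location/to/save/output.jpg

        # list the configured delegates; .ai rendering goes through Ghostscript (gs)
        /usr/bin/convert -list delegate | grep -i gs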

    Read the article

  • A dacpac limitation – Deploy dacpac wizard does not understand SqlCmd variables

    - by jamiet
    Since the release of SQL Server 2012 I have become a big fan of using dacpacs for deploying SQL Server databases (for reasons that I will explain some other day) and I chose to use a dacpac to distribute my recently announced utility sp_ssiscatalog (read: Introducing sp_ssiscatalog (v1.0.0.0)). Unfortunately if you read that blog post you may have taken note of the following: Ordinarily a dacpac can be deployed to a SQL Server from SSMS using the Deploy Dacpac wizard, however in this case there is a limitation. Due to sp_ssiscatalog referring to objects in the SSIS Catalog (which it has to do, of course) the dacpac contains a SqlCmd variable to store the name of the database that underpins the SSIS Catalog; unfortunately the Deploy Dacpac wizard in SSMS has a rather gaping limitation in that it cannot deploy dacpacs containing SqlCmd variables. I think it is worth calling out this limitation separately in this blog post because it's a limitation that all dacpac users need to be aware of. If you try to deploy the dacpac containing sp_ssiscatalog using the wizard in SSMS then this is what you will see:

        TITLE: Microsoft SQL Server Management Studio
        ------------------------------
        Could not deploy package. (Microsoft.SqlServer.Dac)
        ------------------------------
        ADDITIONAL INFORMATION:
        Missing values for the following SqlCmd variables: SSISDB. (Microsoft.Data.Tools.Schema.Sql)
        ------------------------------
        BUTTONS: OK
        ------------------------------

    The message is quite correct. The SSDT DB project that I used to build this dacpac *does* have a SqlCmd variable in it called SSISDB. Quite simply, the Dac Deployment wizard in SSMS is not capable of deploying such dacpacs. Your only option for deploying them is the command-line tool sqlpackage.exe. Generally I use sqlpackage.exe anyway (which is why it has taken me months to encounter the aforementioned problem) and have found it preferable to using a GUI-based wizard. Your mileage may vary. @Jamiet
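    For reference, a hedged sketch of what that sqlpackage.exe call looks like (the file, server and database names are placeholders; the /v: switch supplies the SqlCmd variable value that the SSMS wizard cannot):

        SqlPackage.exe /Action:Publish ^
            /SourceFile:"sp_ssiscatalog.dacpac" ^
            /TargetServerName:"MYSERVER" ^
            /TargetDatabaseName:"MyTargetDb" ^
            /v:SSISDB=SSISDB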

    Read the article

  • Internet Explorer and Cookie Domains

    - by Rick Strahl
    I've been bitten by some nasty issues today in regards to using a domain cookie as part of my FormsAuthentication operations. In the app I'm currently working on we need to have single sign-on that spans multiple sub-domains (www.domain.com, store.domain.com, mail.domain.com etc.). That's what a domain cookie is meant for - when you set the cookie with a Domain value of the base domain the cookie stays valid for all sub-domains. I've been testing the app for quite a while and everything is working great. Finally I get around to checking the app with Internet Explorer and I start discovering some problems - specifically on my local machine using localhost. It appears that Internet Explorer (all versions) doesn't allow you to specify a domain of localhost, a local IP address or machine name. When you do, Internet Explorer simply ignores the cookie. In my last post I talked about some generic code I created to basically parse out the base domain from the current URL so a domain cookie would automatically be used, via this code:

        private void IssueAuthTicket(UserState userState, bool rememberMe)
        {
            FormsAuthenticationTicket ticket = new FormsAuthenticationTicket(1, userState.UserId,
                DateTime.Now, DateTime.Now.AddDays(10), rememberMe, userState.ToString());

            string ticketString = FormsAuthentication.Encrypt(ticket);
            HttpCookie cookie = new HttpCookie(FormsAuthentication.FormsCookieName, ticketString);
            cookie.HttpOnly = true;

            if (rememberMe)
                cookie.Expires = DateTime.Now.AddDays(10);

            var domain = Request.Url.GetBaseDomain();
            if (domain != Request.Url.DnsSafeHost)
                cookie.Domain = domain;

            HttpContext.Response.Cookies.Add(cookie);
        }

    This code works fine on all browsers but Internet Explorer, both locally and on full domains. And it also works fine for Internet Explorer with actual 'real' domains. However, this code fails silently for IE when the domain is localhost or any other local address. In that case Internet Explorer simply refuses to accept the cookie and fails to log in. Argh! The end result is that the solution above, trying to automatically parse the base domain, won't work as local addresses end up failing.

    Configuration Setting

    Given this screwed up state of affairs, the best solution to handle this is a configuration setting. Forms Authentication actually has a domain key that can be set for FormsAuthentication, so that's the natural choice for storing the domain name:

        <authentication mode="Forms">
          <forms loginUrl="~/Account/Login"
                 name="gnc"
                 domain="mydomain.com"
                 slidingExpiration="true"
                 timeout="30"
                 xdt:Transform="Replace"/>
        </authentication>

    Although I'm not actually letting FormsAuth set my cookie directly I can still access the domain name from the static FormsAuthentication.CookieDomain property, by changing the domain assignment code to:

        if (!string.IsNullOrEmpty(FormsAuthentication.CookieDomain))
            cookie.Domain = FormsAuthentication.CookieDomain;

    The key is to only set the domain when actually running on a full authority, and to leave the domain key blank on the local machine to avoid the local address debacle. Note that if you want to see this fail with IE, set the domain to domain="localhost" and watch in Fiddler what happens.

    Logging Out

    When specifying a domain key for a login it's also vitally important that that same domain key is used when logging out. Forms Authentication will do this automatically for you when the domain is set and you use FormsAuthentication.SignOut(). If you use an explicit Cookie to manage your logins or other persistent values, make sure that when you log out you also specify the domain. IOW, the expiring cookie you set for a 'logout' should match the same settings - name, path, domain - as the cookie you used to set the value:

        HttpCookie cookie = new HttpCookie("gne", "");
        cookie.Expires = DateTime.Now.AddDays(-5);

        // make sure we use the same logic to release the cookie
        var domain = Request.Url.GetBaseDomain();
        if (domain != Request.Url.DnsSafeHost)
            cookie.Domain = domain;

        HttpContext.Response.Cookies.Add(cookie);

    I managed to get my code to do what I needed it to, but man, I'm getting so sick and tired of fixing IE-only bugs. I spent most of the day today fixing a number of small IE layout bugs along with this issue, which took a bit of time to trace down.

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in ASP.NET

    Read the article

  • TTY Resolution in Xubuntu 9.10

    - by Zurahn
    I've exhausted my ability to search through Google for this, so I'm giving it a go here. What I'm trying to do is increase the resolution (or decrease the font size) in the TTY terminals. Xubuntu 9.10 uses GRUB2, and everything I can find directs me to edit the /etc/default/grub file to add vga=XXX to the GRUB_CMDLINE_LINUX value, but this simply doesn't work; no amount of fiddling with the file ever seems to change anything. On my netbook running an earlier version, I had success with this command:

        dpkg-reconfigure console-setup

    But here, once again, it yields no change. Got any ideas?
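    For comparison, a minimal sketch of the edit being described; the vga value is just an example (791 is 1024x768, 16-bit), and with GRUB2 the config has to be regenerated before any edit takes effect:

        # /etc/default/grub
        GRUB_CMDLINE_LINUX="vga=791"

        # regenerate /boot/grub/grub.cfg, then reboot
        sudo update-grub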

    Read the article

  • Subversion error: Repository moved permanently to please relocate

    - by Bart S.
    I've set up Subversion and Apache on my server. If I browse to it through my web browser it works fine (http://svn.host.com/reposname). However, if I do a checkout on my machine I get the following error:

        Command: Checkout from http://svn.host.com/reposname, revision HEAD, Fully recursive, Externals included
        Error: Repository moved permanently to 'http://svn.host.com/reposname/'; please relocate

    I checked Apache's error log, but it doesn't say anything. My repositories are stored under /var/www/svn/repos/ and my website is stored under /var/www/vhosts/x/... Here's the conf file for the subdomain:

        <Location />
            Options +indexes
            DAV svn
            SVNParentPath /var/www/svn/repos/
            AuthType Basic
            AuthName "Authorization Realm"
            AuthUserFile /var/www/svn/auth/svn.htpasswd
            Require valid-user
        </Location>

    Authentication works fine. Does anyone know what might be causing this?

    Read the article

  • how to fix "BusyBox v1.17.1 (Ubuntu 1:1.17.1-10ubuntu1) built-in shell (ash) Enter 'help' for a list of built-in commands?"

    - by Joseph
    So I was using Ubuntu when suddenly the whole thing froze up and I had to reboot. From that moment on, the system prompts this little selection menu while starting up:

        GNU GRUB version 1.99~rc1-13ubuntu3
          Ubuntu, with Linux 2.6.38-10-generic
          Ubuntu, with Linux 2.6.38-10-generic (recovery mode)
          Previous Linux versions
          Memory test (memtest86+)
          Memory test (memtest86+, serial console 115200)

    I have tried all of the available choices, but all I get is another command-line system that reads:

        BusyBox v1.17.1 (Ubuntu 1:1.17.1-10ubuntu1) built-in shell (ash)
        Enter 'help' for a list of built-in commands.

        (initramfs):

    And honestly I can't do anything with it. Does anyone have any idea of what is going on and how I can get Ubuntu to work again?

    Read the article

  • growing EBS RAID volume

    - by Ryan Fernandes
    I've created a RAID0 configuration with two 1GB EBS volumes, mounted at /dev/md0 using mdadm and formatted with XFS. Next, I copied some files over to fill the volume to around 30% of its capacity (of 2GB). I then created snapshots of the volumes using ec2-consistent-snapshot and created volumes from those snapshots, but specified the volume size as 2GB (effectively doubling the capacity of each disk). I then spun up a new instance, assembled the RAID0 configuration on /dev/md0 from the two volumes mentioned above, and mounted it at /vol. df -hT showed /vol as 2GB (as expected). Now I ran sudo xfs_growfs -d /vol. The command completed normally but reported blocks changed from 523776 to 524160 (only!), and df -hT still showed /vol as 2GB (instead of the expected 4GB). I rebooted, remounted, and reassembled the RAID, but it still reports the old size. Any clue as to what went wrong?
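    One way to narrow this down, as a sketch: check whether the md device itself picked up the larger members, since xfs_growfs can only grow into space the array actually exposes (device name as in the question):

        # report the array's component and total size
        sudo mdadm --detail /dev/md0
        cat /proc/mdstat

    If the array still reports roughly 2GB, the limit is at the RAID layer, not XFS.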

    Read the article

  • Samba Server Make Multiple User Permissions Profiles

    - by Scriptonaut
    I have a Samba file server running, and I was wondering how I could make multiple user accounts that have different permissions. For example, at the moment I have a user, smbusr, but when I SSH to the share, I can read, write, execute, and even navigate out of the Samba directory and do stuff on the actual computer. This is bad because I want to be able to give out my IP so friends/family can use the server, but I don't want them to be able to do just anything. I want to lock the user into the Samba share directory (and all its subdirectories). Eventually I would like several profiles, such as smbusr_R, smbusr_RW, smbguest_R, smbguest_RW. I also have a second question related to this: is SSH the best method to connect from other Unix machines? What about VPN? Or simply mounting like this:

        mount -t ext3 -o user=username //ipaddr/share /mnt/mountpoint

    Is that mounting command the same thing as a VPN? This is really confusing me. Thanks for the help guys; let me know if you need to see any files or need any more information.
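    A rough smb.conf sketch of two of the profiles described (the share path and user names are examples; unlike an SSH login, Samba share access stays confined to the share's path):

        [share_ro]
           path = /srv/samba/share
           valid users = smbusr_R
           read only = yes

        [share_rw]
           path = /srv/samba/share
           valid users = smbusr_RW
           read only = no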

    Read the article

  • FTP connection is aborted

    - by Conrad C
    I want to connect using FTP to my webpage hosted on ipages.com, but I always get this error in FileZilla:

        Status:   Server does not support non-ASCII characters.
        Status:   Connected
        Status:   Retrieving directory listing...
        Command:  PWD
        Response: 550 PWD: Permission denied
        Error:    Failed to retrieve directory listing
        Error:    Disconnected from server: ECONNABORTED - Connection aborted

    It looks like the connection is established but then disconnects. Is it an issue with the host? I use the default port 21, the user/pass is working, and the FTP address is ftp.mysite.com. I tested port 21 using netstat and I get:

        220 Ipage FTP Server Ready

    Read the article

  • libcrypto.so.0.9.8: could not read symbols: Invalid operation

    - by Doug
    Trying to make PHP 5.4.4 with various extensions (I know you can apt-get this, but I need to do it because I have a new installation of Apache 2.4.2 which isn't available via repos). However, I am stuck and I don't know what this error means:

        /usr/bin/ld: ext/curl/.libs/interface.o: undefined reference to symbol 'CRYPTO_set_id_callback@@OPENSSL_0.9.8'
        /usr/bin/ld: note: 'CRYPTO_set_id_callback@@OPENSSL_0.9.8' is defined in DSO /usr/lib/libcrypto.so.0.9.8 so try adding it to the linker command line
        /usr/lib/libcrypto.so.0.9.8: could not read symbols: Invalid operation
        collect2: ld returned 1 exit status
        make: *** [sapi/cli/php] Error 1
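    The linker note points at the workaround people commonly suggest for this PHP build error: explicitly adding libcrypto (and libssl) to the link line. As a hedged sketch only; it assumes the generated Makefile has an EXTRA_LIBS line, so inspect it before editing:

        # append the SSL libraries to EXTRA_LIBS in the generated Makefile, then rebuild
        sed -i 's/^EXTRA_LIBS = /EXTRA_LIBS = -lssl -lcrypto /' Makefile
        make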

    Read the article

  • Running Java from a Windows batch file causes the batch file to stop

    - by jjkparker
    When I run Java from a Windows .cmd file (Vista 32-bit here), the Java command causes the batch file to stop executing additional commands. For example, this is a simple test.cmd file:

        java
        java

    This should cause Java to print its help message twice. However when I run it in cmd.exe, I get this:

        C:\>test

        C:\>java
        Usage: java [-options] class [args...]
                   (to execute a class)
           or  java [-options] -jar jarfile [args...]
                   (to execute a jar file)

        where options include:
            -client       to select the "client" VM
            -server       to select the "server" VM
            ...

        C:\>

    The batch file simply exits when Java exits. What's going on here?

    Read the article

  • Error on `gksu nautilus` because Nautilus cannot create folder "/root/.config/nautilus"

    - by luciehn
    I have a problem running Nautilus in root mode on a fresh installation. I just installed Ubuntu 12.04, and the first thing I did after the first boot was run the command gksudo nautilus. I got this error message:

        Nautilus could not create the required folder "/root/.config/nautilus".
        Before running Nautilus, please create the following folder, or set permissions
        such that Nautilus can create it.

    I am triple-booting Windows 7, Fedora 17 and Ubuntu 12.04. My partition configuration is this:

        sda1 --> (ntfs) Windows 7 boot partition
        sda2 --> (ntfs) Windows 7
        sda3 --> (ext4) Fedora 17 /boot partition
        sda4 --> Extended partition
        sda5 --> LVM Fedora 17, with 3 partitions inside (/, /home and swap)
        sda6 --> (ext4) Ubuntu 12.04 /boot partition
        sda7 --> (ext4) Ubuntu 12.04 swap partition
        sda8 --> (ext4) Ubuntu 12.04 / partition
        sda9 --> (ext4) Ubuntu 12.04 /home partition

    The MBR is using Windows, so it is the one controlling the machine's boot menu. It looks like Nautilus does not have permission to write in /root/.config, but it should, right? I prefer asking before doing anything wrong. Any ideas?
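    The error message itself suggests a workaround; as a sketch, create the folder by hand before launching:

        sudo mkdir -p /root/.config/nautilus
        gksudo nautilus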

    Read the article

  • Varnish VCL Reload Fails After Adding Second Backend

    - by Andy
    I have been running Varnish on my production server successfully for several weeks now. Now I'm trying to configure Varnish to use a second backend for certain requests. My original working VCL (/etc/varnish/default.vcl) begins like this:

        backend default {
            .host = "127.0.0.1";
            .port = "8080";
        }

        ...rest of VCL...

    And I'm changing it to:

        backend default {
            .host = "127.0.0.1";
            .port = "8080";
        }

        backend backend2 {
            .host = "12.34.56.78";
            .port = "80";
        }

        ...rest of VCL...

    When I reload the VCL file, I get the following:

        Command failed with error code 106
        Failed to reload /etc/varnish/default.vcl.

    Any idea what the error could be, or how I can get more information on the problem?
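    To get a real error message instead of "error code 106", one option (as a sketch) is to have varnishd compile the VCL directly; it prints the compiler diagnostics to the terminal:

        # compile-check the VCL without starting the daemon
        varnishd -C -f /etc/varnish/default.vcl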

    Read the article

  • MySQL equivalent to .pgpass, or automatic authentication in a cron job for mySQL

    - by Ibrahim
    I'm writing a bash script to back up my databases. Most are postgresql, and in postgres there's a way to avoid having to authenticate by creating a ~/.pgpass file which contains the postgres password. I put this in root's home directory and made it chmod 0600, so that root could dump the postgres databases without having to authenticate. Now I want to do something similar for mysql, although I only have one mysql database. How can I do this? I don't want to specify the password on the command line for mysqldump because this is part of a script that might be somewhat visible to other users. Is there a better way (i.e. built in to mysql) to do this than make a file that only root can read and then read that to get the mysql password, and then use that in the bash script as a variable?
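    MySQL's rough equivalent of .pgpass is an option file; here is a minimal sketch, kept root-only exactly like the .pgpass approach described (the password is obviously a placeholder):

        # /root/.my.cnf, chmod 0600
        [mysqldump]
        user = root
        password = secret

    With that in place, a cron'd mysqldump authenticates without putting the password on the command line:

        mysqldump --all-databases > /backup/mysql/all.sql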

    Read the article


  • Routing all Traffic through OpenVPN Tunnel

    - by Filip Ekberg
    I have installed an OpenVPN server on Arch Linux and am now using OpenVPN GUI on Windows 7. I can talk to other computers connected through the VPN, but I have not yet figured out how to route all traffic through the tunnel. How do I do this? I figured I need to do it with route (the cmd command), but I think I need some pointers here. I've followed the OpenVPN HowTo on the matter, but that doesn't work; it simply doesn't push the "force the client to go through this gateway" option. And changing from OpenVPN to a PPTP/IPsec alternative is not an option at the moment.
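    The server-side directive the HowTo refers to looks like this, as a sketch (it goes in the OpenVPN server config; "def1" overrides the default route without deleting the original one, and the server still needs NAT/forwarding set up for client traffic to reach the internet):

        # server.conf
        push "redirect-gateway def1"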

    Read the article

  • disparity between `top`'s given CPU % and process CPU usage total

    - by intuited
    I've noticed that there are sometimes (large) differences between the reported total CPU usage and a summation of the per-process CPU utilization given by apps like top and wmtop. As an example: I recently ran a git filter-branch --index-filter on a fairly large repo, with the index-filter command piping git ls-files through a grep filter and into xargs git rm --cached. This took a few minutes to run; while it was going I noticed that both wmtop and top were displaying a high (above 50% on my 2-core machine) total CPU usage, but that neither showed any individual processes which were using a significant amount of CPU time. Are some processes not shown in the process list? What sorts of processes are these, and is there a way to find out how much CPU time they are using?

    Read the article

  • Robocopy permission denied

    - by Edoode
    Robocopy is preinstalled with Windows 7, and I've used it many times in the past. I tried to copy a folder to a remote share with:

        robocopy c:\source "\\server\share\path" /s /r:2 /w:2

    As a result I get "permission denied". Using Explorer I can copy files to this share. I've opened a command prompt with administrator permissions, with the same result. The share is read/write for public. EDIT: I've successfully mapped a drive letter to the share, but robocopy still fails. EDIT: I've added the /B switch without success. The exact error is:

        2009/09/26 20:43:14 ERROR 5 (0x00000005) Accessing Destination Directory \\drobo\Drobo\fotos__NEW\Ericsson\

    Read the article

  • Software to Monitor the Stability of Internet Connection

    - by Ngu Soon Hui
    Thanks to the excellent internet connection service offered by one of the best ISPs in the world, the internet connection in my area is very, very unstable. I can connect some of the time, but MOST of the time the connection will just drop (with the error message "unable to resolve host"), and after a few minutes it will resume. If I ping a domain name directly (i.e., ping www.google.com -t at the cmd prompt), I get a "cannot ping" message. Because of the intermittent nature of the connection, it's pretty hard to prove to the support staff that it is unstable. So I am thinking about using some software to record the connection's behavior, so that I can present it to the technical staff and make sure they have no excuse not to fix my problem. Is any such software available? Edit: Of course, such software should not record my browsing habits, and it must be able to monitor and record the connection's condition even when I am not online.
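    Even without dedicated software, a throwaway batch script can produce the kind of timestamped evidence described; a sketch (log path and target host are examples, and the timeout command needs Vista or later):

        @echo off
        :loop
        ping -n 1 www.google.com >nul 2>&1
        if errorlevel 1 (echo %date% %time% DOWN>>C:\connlog.txt) else (echo %date% %time% UP>>C:\connlog.txt)
        timeout /t 60 >nul
        goto loop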

    Read the article

  • Thunderbird compact is taking forever

    - by mulllhausen
    One day I came in to work and found that our development server, an Ubuntu box, had a full hard disk. I did a bit of investigation using the du command, and it seems like Mozilla Thunderbird is the major culprit. After burning off some backups, the disk was left at 94%:

        $ df -h
        Filesystem            Size  Used Avail Use% Mounted on
        /dev/sda1             895G  791G   59G  94% /
        none                  4.0G  300K  4.0G   1% /dev
        none                  4.0G  1.4M  4.0G   1% /dev/shm
        none                  4.0G  140K  4.0G   1% /var/run
        none                  4.0G     0  4.0G   0% /var/lock
        none                  4.0G     0  4.0G   0% /lib/init/rw

        $ cd
        $ du -ch | grep [0-9]G
        666G    ./.thunderbird/ccsmcruu.default/ImapMail/mail.adofms.com.au
        666G    ./.thunderbird/ccsmcruu.default/ImapMail
        667G    ./.thunderbird/ccsmcruu.default
        667G    ./.thunderbird
        2.2G    ./.VirtualBox/Machines/iBike/Snapshots
        2.2G    ./.VirtualBox/Machines/iBike
        2.2G    ./.VirtualBox/Machines
        2.2G    ./.VirtualBox
        670G    .
        670G    total

    I did some reading and found that Mozilla Thunderbird does not compact folders by default, i.e. all of the old emails that were sent to trash are still kept. One of the mailboxes used to get a lot of spam, so I guess this accounts for the 667GB. I opened up Thunderbird to see how much space the inbox actually takes up, and it turns out to be approximately 500MB - over 1000 times less than the stuff that has not been compacted over the years. So I right-clicked the inbox directory in the tree on the left of Thunderbird and selected 'Compact'. I left it for about 12 hours, but even after that it still said 'compacting folder' in the status bar. I don't use Thunderbird on this PC - it belonged to a colleague who has left the company - however I do occasionally need to look through the inbox for references to the project I am working on, so deleting all traces of Thunderbird is not an option. My question is: is there any way I can monitor the progress of Thunderbird's compacting function? I would really like to know how long it is going to take. Also, is there any way I can speed up the compacting process?
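    There is no progress indicator beyond the status bar, but the compaction can be watched indirectly; as a sketch, poll the size of the mail directory from the du output above (Thunderbird typically writes the compacted copy to a temporary "nstmp" file alongside the original, so the directory's contents change as it works):

        # refresh every 60 seconds
        watch -n 60 du -sh ~/.thunderbird/ccsmcruu.default/ImapMail/mail.adofms.com.au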

    Read the article

  • GConf error and gnome does not load properly in RHEL 5.3

    - by Tim
    Hello, I am using Red Hat Enterprise Linux 5.3. I created a user oracle on the system using the following command:

        useradd -g oinstall -G dba,oper -d /home/oracle oracle

    Now, when I try to log in as the user oracle, GNOME does not load properly and I get a popup box with an error message like the following:

        GConf error: Failed to contact configuration server; some possible causes are that you
        need to enable TCP/IP for ORBit, or you have NFS locks due to a system crash.
        (Details - 1: IOR file '/tmp/gconfd-cheetahman/lock/ior' not opened successfully,
        no gconfd located: Permission denied  2: IOR file '/tmp/gconfd-cheetahman/lock/ior'
        not opened successfully, no gconfd located: Permission denied)

    Any way to fix this? Thank you

    Read the article
