Search Results

Search found 20785 results on 832 pages for 'idea'.


  • BackupExec 12 + RALUS - VERY slow backups

    - by LVDave
    We use Backup Exec 12 and the Remote Agent for Linux/Unix Servers (RALUS) to back up a large RHEL5 system. For various reasons we need to do a daily working-set job, and these jobs run abysmally slow. The link between the target machine and the BE server is gigabit, and any other type of job runs at 1-3 GB/min. These working-set jobs start out at perhaps 40 MB/min and over the course of the job slowly drop so low that the BE job rate display in "current jobs" goes blank. Since we usually only capture changed files for one day, the job is small, finishes overnight, and we don't worry about the slowness; but we had some issues with the backup server and missed about 6 days of fairly heavy work on the Linux box, so this working-set job will be a doozy. We have support with Symantec, and I've pestered them a lot about this; they've had me run RALUS in debug mode, I sent them that log and a VXgather from the BE host, and they had no fix or workaround. To give an idea, the working-set job in question has been running for the last 3 1/2 hours and has backed up just under 10 MEGAbytes. I'm posting this here to see if anybody in the "real world" has seen this and/or has any ideas what might be causing these abysmally slow jobs, since Symantec seems to be clueless.

    Read the article

  • aufs user permissions

    - by user56395
    Anyone know why this doesn't work? Is this user error, an AUFS feature, or maybe a bug?

      rac@tecraS1:~/tmp$ mkdir orig tmp au
      rac@tecraS1:~/tmp$ sudo mount -t tmpfs none tmp
      rac@tecraS1:~/tmp$ sudo chown -R rac tmp
      rac@tecraS1:~/tmp$ echo hello > orig/hello
      rac@tecraS1:~/tmp$ sudo mount -t aufs -o br=tmp:orig none au
      rac@tecraS1:~/tmp$ ls -al au
      total 8
      drwxrwxrwt 4 rac root  100 2011-01-06 13:53 .
      drwxr-xr-x 5 rac rac  4096 2011-01-06 13:52 ..
      -rw-r--r-- 1 rac rac     6 2011-01-06 13:53 hello
      rac@tecraS1:~/tmp$ rm au/hello
      rm: cannot remove `au/hello': Operation not permitted
      rac@tecraS1:~/tmp$

    Seems the aufs files were created as root and the user has no access to them:

      rac@tecraS1:~/tmp$ sudo rm au/hello
      rac@tecraS1:~/tmp$ ls -al tmp
      total 4
      drwxrwxrwt 4 rac  root  120 2011-01-06 13:53 .
      drwxr-xr-x 5 rac  rac  4096 2011-01-06 13:52 ..
      -r--r--r-- 2 root root    0 2011-01-06 13:53 .wh.hello
      -r--r--r-- 2 root root    0 2011-01-06 13:53 .wh..wh.aufs
      drwx------ 2 root root   40 2011-01-06 13:53 .wh..wh.orph
      drwx------ 2 root root   40 2011-01-06 13:53 .wh..wh.plnk
      rac@tecraS1:~/tmp$

    OS is the latest Lucid with the stock 2.6.35-23 kernel. No idea about the aufs version. Using sudo chown -R rac tmp/.wh* fixes the problem. Thanks for looking.

    Read the article

  • Difficulty restoring a differential backup in SQL Server, 2 media families are expected or no files are ready for rollforward

    - by digiguru
    I have SQL backups copied from server A to server B on a nightly basis. We want to move the SQL Server from server A to server B without much downtime, but the files are very large. I assumed that performing a differential backup and restore would solve the problem. What I did:

      1. Copy the full backup from server A to server B (10+ GB)
      2. Open SQL Server Management Studio on server B
      3. Right-click on Databases > Restore Database
      4. Type in the new DB name
      5. Choose "From Device", browse to the backup file, and click OK - this restores the original "full" backup
      6. Test the new DB with the dev application - everything works :)
      7. On the original database, right-click > Tasks > Back Up... with Backup Type = Differential, backup to disk, add a new file and remove the old one (it needs to be a small file to transfer for the smallest amount of outage)
      8. Copy the differential backup over to the new server
      9. Right-click the DB > Tasks > Restore Database

    This is where I get stuck. If I add both the new differential file and the original backup to the restore process, I get an error:

      The media loaded on "M:\path\to\backup\full.bak" is formatted to support 1 media families, but 2 media families are expected according to the backup device specification. RESTORE HEADERONLY is terminating abnormally.

    But if I try to restore using just the differential file, I get:

      System.Data.SqlClient.SqlError: The log or differential backup cannot be restored because no files are ready to rollforward. (Microsoft.SqlServer.Smo)

    Any idea how to do it? Is there a better way of restoring backups with limited downtime?
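    For reference, the "no files are ready to rollforward" error usually means the full backup was restored WITH RECOVERY, which closes the database to further restores. A minimal T-SQL sketch of the two-step restore (the database and logical file names here are hypothetical; RESTORE FILELISTONLY shows the real ones):

      -- Restore the full backup but leave the database able to accept more restores
      RESTORE DATABASE NewDB
      FROM DISK = 'M:\path\to\backup\full.bak'
      WITH NORECOVERY,
           MOVE 'NewDB_Data' TO 'D:\Data\NewDB.mdf',
           MOVE 'NewDB_Log'  TO 'D:\Data\NewDB_log.ldf';

      -- Apply the differential and bring the database online
      RESTORE DATABASE NewDB
      FROM DISK = 'M:\path\to\backup\diff.bak'
      WITH RECOVERY;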

    Read the article

  • Error getting PAM / Linux integrated with Active Directory

    - by topper
    I'm trying to add a Linux server to a network which is controlled by AD. The aim is that users of the server will be able to authenticate against the AD domain. I have Kerberos working, but NSS / PAM are more problematic. I'm trying to debug with a simple command such as the following; please see the error. Can anyone help me debug this?

      root@antonyg04:~# ldapsearch -H ldap://raadc04.corp.MUNGED.com/ -x -D "cn=MUNGED,ou=Users,dc=corp,dc=MUNGED,dc=com" -W uid=MUNGED
      Enter LDAP Password:
      ldap_bind: Invalid credentials (49)
              additional info: 80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 525, vece

    I have had to munge some details, but I can tell you that cn=MUNGED is my username for logging into the AD domain, and the password that I typed was the password for said domain. I don't know why it says "Invalid credentials", and the rest of the error is so cryptic I have no idea. Is my approach somehow flawed? Is my DN obviously wrong? How can I confirm the correct DN? There was a tool online but I can't find it. NB I have no access to the AD server for administration or configuration.
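    In AD's bind errors, "data 525" usually means the bind DN was not found at all (a wrong password typically gives "data 52e"), so the DN itself is the likely problem; in a default AD layout users live in the cn=Users container rather than an OU, and the cn is the display name rather than the login name. AD also accepts the userPrincipalName as a simple-bind identity, which sidesteps the DN question entirely. A hedged example, where the UPN and search base are assumptions:

      # Bind with the UPN instead of a full DN, then look up your own entry (and DN)
      ldapsearch -H ldap://raadc04.corp.MUNGED.com/ -x \
        -D "MUNGED@corp.MUNGED.com" -W \
        -b "dc=corp,dc=MUNGED,dc=com" "(sAMAccountName=MUNGED)"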

    Read the article

  • Internet connection problem: ping OK, but Outlook and browsers don't work

    - by Ashian
    For some days now I have had a big problem on my laptop (running Windows XP SP3). When I connect to the internet I can ping web sites, but when I try to browse them, sometimes it works correctly and sometimes the connection to the server is interrupted and I have to refresh the page several times. In that case the browser shows a connection problem immediately after I click on the address bar or a link on the page (without any attempt to connect to the server). I use Firefox and Opera and both of them have this problem. I tried another ISP and still have this problem. I don't use any proxy server and I have checked the proxy settings. In this state Outlook also can't connect to the mail server. The problem goes away for a while after some time or after restarting Windows. I checked for viruses and couldn't find anything. Any idea how I can fix it? UPDATE: Thanks for your responses. I tested them, and I also tried OpenDNS settings, and that didn't help. Last night I noticed that my local web applications (such as the ADSL modem config site, and sites that I set up on the Windows XP IIS) also can't open, and an internal communication error appears (Opera message); that isn't related to DNS settings or the internet connection.

    Read the article

  • 100% uptime for a web application

    - by Chris Lively
    We received an interesting "requirement" from a client today. They want 100% uptime with off-site failover on a web application. From our web application's viewpoint, this isn't an issue. It was designed to be able to scale out across multiple database servers, etc. However, from a networking standpoint I just can't seem to figure out how to make it work. In a nutshell, the application will live on servers within the client's network. It is accessed by both internal and external people. They want us to maintain an off-site copy of the system that, in the event of a serious failure at their premises, would immediately pick up and take over. Now we know there is absolutely no way to resolve it for internal people (carrier pigeon?), but they want the external users to not even notice. Quite frankly, I haven't the foggiest idea of how this might be possible. It seems that if they lose Internet connectivity then we would have to do a DNS change to forward traffic to the external machines, which, of course, takes time. Ideas? UPDATE: I had a discussion with the client today and they clarified the issue. They stuck by the 100% number, saying the application should stay active even in the event of a flood. However, that requirement only kicks in if we host it for them. They said they would handle the uptime requirement if the application lives entirely on their servers. You can guess my response.
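    Not an answer to the 100% figure, but the DNS delay mentioned above is bounded by the record's TTL: with a short TTL, most resolvers re-query within a minute or so of a failover change (some resolvers ignore TTLs, which is one reason true 100% via DNS alone is not achievable). A rough sketch in BIND zone-file syntax, with illustrative names and documentation addresses:

      ; short TTL so a repoint propagates quickly
      app.example.com.    60    IN  A  203.0.113.10    ; primary site
      ; after failover, the record would be changed to the off-site copy:
      ; app.example.com.  60    IN  A  198.51.100.20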

    Read the article

  • LightTPD and PHP not working if outside of LightTPD folder

    - by Marco83
    I need to set up a simple web server with PHP on Windows XP that a number of different people will use for local testing. I'm using LightTPD 1.4.30-4-IPv6-Win32-SSL and PHP 5.2. So far I've created this folder structure:

      tools/
        LightTPD/
          htdocs/
        PHP/

    I set up PHP as CGI and the document root as server_root + "/htdocs". It works fine (well, it's slow, but I don't want to bother with FastCGI for now :) ). My problem is when I try to put htdocs outside of the LightTPD folder, like this:

      htdocs/
      tools/
        LightTPD/
        PHP/

    I update the document root to server_root + "/../../htdocs", and while static HTML pages work fine, PHP pages stop working (they return "No input file specified"). I literally just change the document root; I didn't change anything in php.ini or anywhere else. Please also note that I left doc_root, user_dir and cgi.force_redirect at the default values in php.ini, and it works when htdocs is inside LightTPD, but not when I move it outside. Any idea why it's breaking? Here's my lightTPD.conf:

      server.modules = (
        "mod_access",
        "mod_accesslog",
        "mod_alias",
        "mod_cgi",
        "mod_status",
      )

      include "variables.conf"
      include "mimetype.conf"

      # THIS WORKS
      server.document-root = server_root + "/htdocs"
      # THIS DOESN'T
      #server.document-root = server_root + "/../../htdocs"

      server.upload-dirs = ( temp_dir )
      index-file.names = ( "index.php", "index.pl", "index.cgi", "index.cml",
                           "index.html", "index.htm", "default.htm" )
      server.event-handler = "libev"
      url.access-deny = ( "~", ".inc" )
      $HTTP["url"] =~ "\.pdf$" {
        server.range-requests = "disable"
      }
      static-file.exclude-extensions = ( ".php", ".pl", ".cgi" )
      server.errorlog = server_root + "/logs/error.log"

      ######### Options that are good to be but not neccesary to be changed #######
      dir-listing.activate = "enable"

      #### CGI module
      cgi.assign = ( ".php" => server_root + "/../PHP/php-cgi.exe" )
      status.status-url = "/server-status"
      status.config-url = "/server-config"
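    A hedged guess at the cause: php-cgi reports "No input file specified" when it cannot open the script path handed to it in SCRIPT_FILENAME, and a document root containing ".." is a classic trigger for that. One thing worth trying is an absolute, already-normalized path instead of the relative form (the drive and path below are hypothetical):

      # instead of: server.document-root = server_root + "/../../htdocs"
      server.document-root = "C:/htdocs"

    It is also worth confirming that doc_root in php.ini really is empty, since a non-empty doc_root makes php-cgi refuse scripts outside it.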

    Read the article

  • Acer Aspire Touchscreen Not Responding

    - by Jerry
    I have an Acer Aspire Z3101 all-in-one touch screen running Windows 7 Home Premium. I am unable to get the screen to respond to my touch. I have checked the system, and all drivers are OK according to the computer. Using the settings to set up pen and touch, and to calibrate, does not work; my touch brings no response at all. I had reinstalled the applications (Windows Touch Pack and Acer Touch Portal), which didn't help. Today I did a complete reinstall, returning the computer to factory settings, and still there is no response. At first I thought that perhaps a driver or file was missing, but now I doubt that, since the reinstall didn't help. Have you any idea what could be causing this problem? I am now at the stage of pulling my hair out and haven't had much help from Acer. I have had the computer for just over a year, so it is no longer under guarantee, and this has me very worried due to my being unemployed. Any help is much appreciated. Thank you.

    Read the article

  • Fix stubborn 'Setting locale failed.'

    - by user60129
    I have a very stubborn, well-known locale error on Ubuntu 9.10:

      perl: warning: Setting locale failed.
      perl: warning: Please check that your locale settings:
              LANGUAGE = (unset),
              LC_ALL = (unset),
              LC_TIME = "custom.UTF-8",
              LANG = "en_US.UTF-8"

    Tried the following:

      - Added LANG=en_US.UTF-8 and LC_ALL=en_US.UTF-8 to /etc/environment
      - Ran apt-get install --reinstall locales (error: perl: warning: Falling back to the standard locale ("C"). /usr/bin/mandb: can't set the locale; make sure $LC_* and $LANG are correct)
      - Ran sudo dpkg-reconfigure locales. Result: Cannot set LC_ALL to default locale: No such file or directory, and then it updates all locales, including en_US.UTF-8
      - sudo locale-gen updates all locales successfully, including en_US.UTF-8
      - sudo locale-gen un_US en_US.UTF-8 gives no error nor other output
      - In /etc/default/locale it says LANG="en_US.UTF-8"
      - echo $LANG gives en_US.UTF-8
      - /var/lib/locales/supported.d/local says en_US.UTF-8 UTF-8
      - locale -a gives me:

          C
          en_AG
          en_AU.utf8
          en_BW.utf8
          en_CA.utf8
          en_DK.utf8
          en_GB.utf8
          en_HK.utf8
          en_IE.utf8
          en_IN
          en_NG
          en_NZ.utf8
          en_PH.utf8
          en_SG.utf8
          en_US.utf8
          en_ZA.utf8
          en_ZW.utf8
          POSIX

    So well... I am pretty much out of options I can think of. Anybody any idea?? Thanks!
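    One detail stands out in the warning itself: LC_TIME = "custom.UTF-8" names a locale that does not exist, so whatever exports that variable is the likely culprit rather than en_US.UTF-8. A hedged way to hunt it down (the file list is just the usual suspects):

      grep -R "custom" /etc/default/locale /etc/environment \
        ~/.pam_environment ~/.profile ~/.bashrc 2>/dev/null

      # or override it for the session and see if the warnings stop
      export LC_TIME=en_US.UTF-8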

    Read the article

  • Ubuntu: Network connection seems to fail after some time

    - by chrischu
    I just bought a Shuttle XS-35 barebone mini-PC, put a 1 TB WD hard drive and 2 Gigs of RAM into it, and installed Ubuntu on it. The machine will serve as a media server (streaming videos to my PS3) and as a web server for some small private projects. Now I wanted to copy my videos from my Windows 7 machine to the Ubuntu machine and therefore created a Samba share on the Ubuntu machine. I tried copying the files with the standard Windows copy function and with SyncToy, but after some time (sometimes 5 copied files, sometimes 120 copied files) the Samba share just disappears. When that happens I can't reach the internet from the Ubuntu machine, although the network connection still seems to be fine (IP still there etc.). Between the machines lies a Linksys router. When I try to ping my router (after the connection doesn't work anymore) from the Ubuntu machine, only a very small subset of the packets actually get there (something around 20%). When I restart the Ubuntu machine everything seems to work normally again. I have no idea where the problem lies here. Does anybody have a clue? Thanks in advance!

    Read the article

  • Redmine does not return the web page

    - by m0skit0
    I migrated a Redmine installation from an Ubuntu machine to a Debian one (both 32-bit), and now for some reason, for some users it doesn't return the page but only a 200 OK message. Here is the flow (from Wireshark):

      GET /issues/142 HTTP/1.1
      Host: debian:3000
      User-Agent: Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:17.0) Gecko/20100101 Firefox/17.0
      Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
      Accept-Language: en-US,en;q=0.5
      Accept-Encoding: gzip, deflate
      Connection: keep-alive
      Cookie: _redmine_session=BAh7DCIQX2NzcmZfdG9rZW4iMStIM1RBNTlNelZVUXlUazgrR1pUNGUvNGdEbytUZzRyMVFSUnBvNGhlSDg9Ihd0aW1lbG9nX2luZGV4X3NvcnQiEnNwZW50X29uOmRlc2MiD3Nlc3Npb25faWQiJThiMDk0MzVhOTEzYTI0MzVjOGEzYTRmNDU0NzcwMTAwIgx1c2VyX2lkaQoiFmlzc3Vlc19pbmRleF9zb3J0IgxpZDpkZXNjIg1wZXJfcGFnZWlpIgpxdWVyeXsHOg9wcm9qZWN0X2lkaQc6B2lkaQo%3D--8588c221c0642a12f396239455fb702aec14c9c9; my_wiki_session=f70ae11e1c533c86f0e039d63cf3f69c; my_wikiUserID=1; my_wikiUserName=Yasin
      Cache-Control: max-age=0

      HTTP/1.1 200 OK
      Connection: Keep-Alive
      Date: Wed, 12 Dec 2012 16:30:16 GMT
      Server: WEBrick/1.3.1 (Ruby/1.8.7/2010-08-16)
      Content-Length: 0

    As you can see, I get nothing from the server. This is mostly random: the blank page happens sometimes for some users, and for other users it almost never returns the page... I'm absolutely lost here. Any idea about what can be the cause? Thanks in advance!

    Read the article

  • Amavisd-new (2.6.4-3) failing to do "lookup_sql_dsn" when a large number of emails need to be processed

    - by sandip
    Amavis is failing to do SQL lookups when a large number of emails are sent to it. It throws an error after scanning 40 to 50 emails, like this:

      (!!)TROUBLE in process_request: sql exec: err=7, 57P01,DBD::Pg::st bind_param failed:FATAL: terminating connection due to administrator command\nSSL connection has been closed unexpectedly at (eval 103) line 164, <GEN50> line 5. at (eval 104) line 280, <GEN50> line 5.

    As soon as this error appears in the logs, Amavis stops and port 10024 is closed. Thinking it was an error due to the SSL connection to the database (PostgreSQL 8.4), I disabled SSL in Postgres, but it was of no use. I have tried to configure Amavis on another server, but I got the same error again. This is happening on a production server, so I am not able to scan emails as per user settings. Does anybody have any idea what may be the source of this error? Please help. Thanks in advance.

    Read the article

  • Server 2012 - transparent SMB failover without shared discs, possible?

    - by TomTom
    Here is the scenario: there is a small set (around 200 GB) of data that I HAVE to keep available. Those are basically shared VHD images that serve as master images for a lot of our VMs - they then run in differencing discs off those. The whole set is "mostly read only". In more detail: a file that IS there and IS used will NEVER change. I may delete files (when absolutely not in use) and add new files, but a file that is there once gets read protection set, and that's it until it is retired. Obviously, I need as much uptime as possible. SO FAR we run that by having this directory local on every Hyper-V server. Now I am thinking of moving this into our storage fabric. Due to the "it HAS to be there" I pretty much want a share-nothing architecture. DFS would be perfect for this - a file never changes, so replication would work nicely. Folders could be replicated to a number of servers, and all would reference them from there. Now that Hyper-V supports SMB, it could be a good idea to isolate these on a number of servers - we are trying to move into a scenario where the storage is more centralized. Server 2012 supports always-on shares, but it seems that this only works with a clustered disc behind. Is there any way around this for read-only file stores? All documentation points to stuff like a shared JBOD - but that would leave me open to file system corruption. I really plan to go quite separately here, vertically - 2 servers, both with SSD only for this, both with their own 2000W separate UPS, both with enough bandwidth to handle everything thrown at them (note to everyone thinking this is 10G - that would be SLOW and EXPENSIVE compared to a nice InfiniBand backbone). The real crux is that this is an edge case obviously - as the files are read only once in use.

    Read the article

  • Recover badly recorded DVDs

    - by CesarGon
    A few years ago (2003-2005) I bought a Sony USB external DVD recorder for my Dell laptop and I used it to burn a lot of discs. Much later, when I tried to use one of these discs, I realised that I could not read it. The disc behaved as if it was scratched or dirty. I tried on a couple of different DVD drives but got the same effect. Sadly, all the discs that I burnt with that recorder suffer from the same problem. Edit. When I read one of these discs with ImgBurn, I get lots of unrecovered read errors in multiple sectors, even at 1x speed. The sectors that cause read errors seem to be quite random; it's not always the same one. I have no idea what could be wrong with the discs. I doubt that they are scratched or dirty; it would be too much of a coincidence that all the discs that I burnt with that recorder got damaged at the same time. Also, they don't show any physical defects. Is there any way to diagnose what the problem is and, hopefully, recover the contents of the discs? Many thanks.
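    If the drives can read parts of the discs, a salvage pass with GNU ddrescue is one common approach: it copies the readable sectors first and retries the bad spots, writing a map file so runs can be resumed. A sketch (device name and paths are assumptions):

      # 2048-byte sectors for optical media, 3 retry passes over bad areas
      ddrescue -b 2048 -r 3 /dev/sr0 disc.iso disc.map

      # the partial image can then be mounted read-only to pull files off
      sudo mount -o loop,ro disc.iso /mnt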

    Read the article

  • SSD not detected on boot-up running Windows 7, with blank HDD installed

    - by Matt. G
    I recently built a PC for a friend; the original system build included a 60 GB primary SSD and a secondary 1 TB HDD. I kept getting blue screens of death and kernel power errors; after investigation it was revealed that a faulty power cable and insufficient thermal paste provided with the included heat sink were the cause. This resolved the problem, but after 3 months I received a phone call saying that the PC was not starting at the point of loading the operating system, with an NTLDR error. I had an idea of the cause, and after the user removed the HDD the computer started up with no issues. Then I asked him to power off and reattach the HDD, and this completely resolved the issue; beforehand, even restarting would not fix it. He does not have a surge protector, and I thought that maybe some registry corruption had occurred due to a power surge; this might be a stupid answer though. Any ideas as to what happened with the machine would be most appreciated. No other issues have been found since the initial fault. The PC uses Windows 7 Home Premium installed on the SSD.

    Read the article

  • How to show what will be updated by the next pull?

    - by ???
    In SVN, doing svn update will show a list of full paths with a status prefix:

      $ svn update
      M    foo/bar
      U    another/bar
      Revision 123

    I need to get this update list to do some post-processing work. After transferring the SVN repository to Git, I can't find a way to get the update list:

      $ git pull
      Updating 9607ca4..61584c3
      Fast-forward
       .gitignore                                         |    1 +
       uni/.gitignore                                     |    1 +
       uni/package/cooldeb/.gitignore                     |    1 +
       uni/package/cooldeb/Makefile.am                    |    2 +-
       uni/package/cooldeb/VERSION.av                     |   10 +-
       uni/package/cooldeb/cideb                          |   10 +-
       uni/package/cooldeb/cooldeb.sh                     |    2 +-
       uni/package/cooldeb/newdeb                         |   53 +++-
       ...update-and-deb-redist => update-and-deb-redist} |    5 +-
       uni/utils/2tree/{list2tree => 2tree}               |   12 +-
       uni/utils/2tree/ChangeLog                          |    4 +-
       uni/utils/2tree/Makefile.am                        |    2 +-

    I can translate Git's pull status list to SVN's format:

      M .gitignore
      M uni/.gitignore
      M uni/package/cooldeb/.gitignore
      M uni/package/cooldeb/Makefile.am
      M uni/package/cooldeb/VERSION.av
      M uni/package/cooldeb/cideb
      M uni/package/cooldeb/cooldeb.sh
      M uni/package/cooldeb/newdeb
      M ...update-and-deb-redist => update-and-deb-redist}
      M uni/utils/2tree/{list2tree => 2tree}
      M uni/utils/2tree/ChangeLog
      M uni/utils/2tree/Makefile.am

    However, some entries with long path names are abbreviated; for example, uni/package/cooldeb/update-and-deb-redist is abbreviated to ...update-and-deb-redist. I assume I can do this with Git directly; maybe I can configure git pull's output to use a special format. Any idea?
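    The stat list git pull prints is cosmetic and abbreviates long paths; the usual approach for a machine-readable list is to diff instead (branch names below are assumptions). --name-status prints single-letter prefixes (M/A/D...) with full, unabbreviated paths, much like svn update:

      # inspect what a merge would change, before merging
      git fetch origin
      git diff --name-status HEAD..origin/master

      # or, right after a pull has already happened
      git diff --name-status HEAD@{1} HEAD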

    Read the article

  • Client cannot access my IIS7 web server

    - by Soccerwiz
    I have a Windows 2008 web server running IIS 7 with about 25 websites. One of those sites is an SaaS application that is accessed constantly throughout the day. However, one particular client keeps getting blocked from my server. They will be using the service, and then all of a sudden they cannot access the program, or any other site on the server. The entire office of 4 users is blocked from accessing anything on the web server. A traceroute reveals they get all the way to the server before they are blocked. However, they can access a Linux server that is a different VM with a different IP on the same physical server. Also, when they are blocked from their office, they can still access the site from their mobile phones or the local Starbucks. They can also occasionally reset their router and regain access to the web server, as they are on a dynamic IP address. I checked IIS and I allow all IPs to access the server. There is nothing in the logs that says anything about a user being banned. I really have no idea what is causing this. Could it be a virus on their end? I have even moved the SaaS to a completely new server in a different location; they were working fine for about a month, and then the problem started occurring again. Are there any hidden blacklists in IIS? Or is it a routing issue on their end?

    Read the article

  • suPHP not working

    - by amarc
    OS: Ubuntu 10.04

    /etc/suphp/suphp.conf:

      [global]
      ;Path to logfile
      logfile=/var/log/suphp/suphp.log
      ;Loglevel
      loglevel=info
      ;User Apache is running as
      webserver_user=www-data
      ;Path all scripts have to be in
      docroot=/home
      ;Path to chroot() to before executing script
      ;chroot=/mychroot
      ; Security options
      allow_file_group_writeable=false
      allow_file_others_writeable=false
      allow_directory_group_writeable=false
      allow_directory_others_writeable=false
      ;Check wheter script is within DOCUMENT_ROOT
      check_vhost_docroot=true
      ;Send minor error messages to browser
      errors_to_browser=false
      ;PATH environment variable
      env_path=/bin:/usr/bin
      ;Umask to set, specify in octal notation
      umask=0077
      ; Minimum UID
      min_uid=100
      ; Minimum GID
      min_gid=100

      [handlers]
      ;Handler for php-scripts
      application/x-httpd-suphp="php:/usr/bin/php-cgi"
      ;Handler for CGI-scripts
      x-suphp-cgi="execute:!self"

    Some vhost in sites-enabled:

      NameVirtualHost *:8080
      <VirtualHost *:8080>
        ServerAdmin ...
        ServerName ...
        ServerAlias ...
        AddType application/x-httpd-php .php
        AddHandler application/x-httpd-php .php
        suPHP_Engine on
        suPHP_UserGroup user user
        suPHP_ConfigPath "/home/user/etc"
        suPHP_PHPPath /usr/bin
        DocumentRoot /home/user/web/site.com/
        ErrorLog /var/log/apache2/site.com-error_log
        CustomLog /var/log/apache2/site.com-access_log common
        <Directory /home/user/web/site.com/>
          Order Deny,Allow
          Allow from all
          Options +Indexes
        </Directory>
      </VirtualHost>

    But when I created /home/user/web/id.php and pasted <?php system('id'); ?> into it, the result I get is:

      uid=33(www-data) gid=33(www-data) groups=33(www-data)

    I have no idea what to do, so I was hoping the community could help. Thank you.
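    One likely explanation, offered as a guess: the vhost maps .php to application/x-httpd-php, the mod_php handler, while suphp.conf only defines a handler for application/x-httpd-suphp, so if mod_php is loaded it serves the scripts itself as www-data and suPHP never runs. A hedged correction for the vhost (mod_php would also need to be disabled, e.g. with a2dismod php5):

      suPHP_Engine on
      suPHP_AddHandler application/x-httpd-suphp
      AddHandler application/x-httpd-suphp .php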

    Read the article

  • Restoring MBR, partition table, and boot sector of memory card without data loss ("USBC")

    - by Synetech
    Abstract: I have a FAT32 memory card that, when inserted into a computer, causes Windows to prompt to format it. The card is definitely not supposed to be blank and has a bunch of files on it.

    Symptoms: Using a hex-editor/disk-viewer, I examined the card and found that several sectors/clusters have been overwritten with something that has a signature of USBC at the start of the sector. Specifically, the master boot record (and partition table) is gone (hence Windows thinking the card is blank and needing to be formatted), as are the boot sectors (they have the USBC signature and a volume label of NO NAME and a partition type of FAT32). Fortunately, it looks like both copies of the FAT are almost entirely intact (a few FAT entries at the start of a cluster here and there seem to be overwritten by USBC). The root directory is also nearly intact - I can see the volume label entry and subdirectory listings, but one sector is overwritten. (There are no more instances of USBC after the last one in the FAT2.)

    Hypothesis: These observations seem to indicate some sort of virus that erases a few key filesystem structures, and then overwrites a few extra sectors here and there. Googling it seems to corroborate the idea of a virus, except that others report a file called USBC, which does not apply here, and in fact could not be possible since there is no filesystem to even see files. I cannot find any information about a virus with these symptoms, nor a removal tool. (I can't help but wonder if it is actually due to an autorun virus prevention tool.)

    Question: I can likely fix the FAT corruption since they are mostly contiguous chains, and maybe even the lost sector of the root directory, but does anyone know of a convenient way to restore or (re)create the MBR/partition table and boot sectors (without formatting or overwriting the data)?
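    On the recovery side: since both FATs and most of the root directory survive, TestDisk is the usual tool for this situation; it can rewrite the partition table it deduces from the filesystem, and restore the FAT32 boot sector from the backup copy FAT32 keeps a few sectors in, all without formatting. A sketch, working on an image rather than the card itself (the device name is an assumption):

      # always image the card first and work on the copy
      ddrescue /dev/sdb card.img card.map

      # interactive; offers partition search and FAT boot-sector repair from backup
      testdisk card.img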

    Read the article

  • Solutions on how to use an OS X calendar as a more perfect time tracking solution for 5-10 users in a small agency?

    - by jnthnclrk
    I really like OS X's iCal. Entering events is easy with the mouse, and it also gives you a very real visual sense of how long tasks take to complete. We often work remotely in our organisation, so we use a few shared calendars between key individuals to provide us with an overview of hours worked, availability & schedule conflicts without too much disruption to our various, hectic workflows. It really is a neat solution, especially on shared tasks. How many times have you tasked a remote colleague and then lost the thread on whether that task was completed or not? With shared calendars you get a much clearer idea of what your people are working on without having to pick up the phone or compose a chat. However, there are a few areas where this approach fails:

      - iCloud syncing often needs to be re-jiggered
      - The "view only" option on shared calendars does not seem to work, which makes all shared calendars editable by others
      - There is no decent reporting with this workflow
      - There is no task categorisation or tagging
      - Things get very busy in iCal when working with more than 2 shared calendars

    I've looked at a few task management apps like Basecamp and Harvest, but nothing appears to let me edit my calendar natively and then sync with a 3rd party. Interested in solutions to improve the above workflow and enable us to elegantly increase the amount of users.

    Read the article

  • Do I need to update some of my Debian Squeeze software?

    - by stan31337
    I have installed Debian 6 and the LAMP stack from the squeeze repository (default). After upgrading Apache 2.2.16 from the unstable repository to 2.2.22 (thanks to this post: how to upgrade already installed apache2 on debian (lenny)), I'm thinking of upgrading all the other software packages that I previously installed from the squeeze repository. Should I upgrade them to the ones from the unstable repository? Should I upgrade all of them or just selected ones? Here's the list:

      - arno-iptables-firewall 1.9.2.k-4 >> 2.0.1.c-1
      - bind9 1:9.7.3.dfsg-1~squeeze6 >> 1:9.8.1.dfsg.P1-4.2
      - php-apc 3.1.3p1-2 >> 3.1.13-1
      - fail2ban 0.8.4-3+squeeze1 >> 0.8.6-3
      - exim4 4.72-6+squeeze2 >> 4.80-4
      - altermime 0.3.10-4 >> 0.3.10-7
      - rrdtool 1.4.3-1 >> 1.4.7-2
      - vsftpd 2.3.2-3+squeeze2 >> 3.0.0-4

    Also, I would like to ask how to upgrade PHP from 5.3.3 to 5.3.16; the unstable repository has 5.4.x versions only, and I don't think I'm ready to move from 5.3 to 5.4 yet. Actually I'm a newbie in Linux, and after my Windows experience I have a paranoid urge to update software to the latest release. I'd be glad for any suggestions and recommendations! Thank you very much!
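    Rather than upgrading everything wholesale, apt pinning lets selected packages come from unstable while the rest keep tracking squeeze; a minimal sketch of /etc/apt/preferences (the priorities are illustrative):

      Package: *
      Pin: release a=stable
      Pin-Priority: 700

      Package: *
      Pin: release a=unstable
      Pin-Priority: 200

    With that in place, a single package can be pulled explicitly with apt-get -t unstable install <package>, and everything else stays on squeeze.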

    Read the article

  • SQL Server 2000, large transaction log, almost empty, performance issue?

    - by Mafu Josh
    This is for a company whose database I have been helping troubleshoot. In SQL Server 2000, the database is about 120 GB. Something caused the transaction log to grow MUCH larger than normal, to over 100 GB - some hung transaction that didn't commit or roll back for a few days. That has been resolved, and the log now stays around 1% full or less, thanks to its hourly transaction log backups. It IS my understanding that a GROWING transaction log file can cause performance issues. But what I am a little paranoid about is the size. Although mainly empty, MIGHT it be having a negative effect on performance? I haven't found any documentation that suggests this is true. I did find this link: http://www.bigresource.com/MS_SQL-Large-Transaction-Log-dramatically-Slows-down-processing-any-idea-why--2ahzP5wK.html but in this post I can't tell if their log was full or empty, and there are no replies to the post. So I am guessing it is not a problem - anyone know for sure?
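    For what it's worth: a mostly-empty log mainly wastes disk, but a log that grew in many small increments accumulates a large number of virtual log files (VLFs), which is known to slow recovery and log-reading operations, so shrinking it back down and letting it grow again in a few large chunks addresses both concerns. A hedged T-SQL sketch for SQL Server 2000 (the database and logical file names are hypothetical; sp_helpfile lists the real ones):

      USE MyDatabase;
      -- target size is in MB
      DBCC SHRINKFILE (MyDatabase_Log, 2000);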

    Read the article

  • Outlook 2010 PST not indexing

    - by kellyllek
    I've had this problem for months: Outlook is unable to search recent items in the inbox. I was running Outlook 2010 Beta; I've just moved to the Trial version, hoping to solve the issue. I have tons of PST files, but one central one I'm mainly concerned with. As of now it seems none of it is indexing. I've been through all the sites and made all the changes: rebuilt the index, changed the name of the PST files, run scanpst, stopped and started the search services, made sure the Windows features under Programs and Features have the indexing option checked, etc. Status now says 'zero items left to index', and 150,000 have been indexed. I think I have a lot more files than that, and also nothing is showing up on any search. I'm not sure what else to do. Side question: I'm going to be moving out of Outlook. However, I have 10+ GB of PST files accumulated over the years. I want to merge them and make them searchable in the easiest way possible. Any idea on how to do that? Could I even move over to Thunderbird right now and be able to index and search my PST files? Also, Google Desktop won't index Outlook 2010 email either...

    Read the article

  • PCI configuration method error (Linux Kernel)

    - by user326580
    (I'm not sure if this is the best place for this question, so I'd be pleased if anyone can suggest a more appropriate forum.) I'm trying to install Ubuntu 12.04.4 on a netbook (from a USB stick), but the kernel stops very early in the initialization process. After two days of research, I've found that it boots with the parameter pci=conf2 but not with the default conf1 method. Nevertheless, after the kernel boots, it seems that Ubuntu can't find USB devices, and I'm not able to install it. Trying with Debian, which has a graphical installer, I found that the mouse isn't working either, so I think PCI devices are not working. I tried about 50% of the kernel PCI boot options in the kernel-parameters file (in conjunction with the implicit default conf1) without luck. Any suggestions? PS: The problem is the same with kernel 2.6 or 3.

    Read the article

  • DNS - wildcard vs. CNAME subdomains

    - by Matthew
    Alright, I have to admit I'm confused about how DNS works. I've always just added things until they worked, and now it's time to learn how they actually work. One confusing thing to me is that there are sort of two places I can have records: I have an account with Rackspace Cloud Servers, and then there's the place where I registered the domain, and both allow me to edit DNS records. Should I do everything in both places, is one better than the other, or am I missing the point? Subdomains confuse me too. I'd like to be able to just have a wildcard subdomain (I've done this in the past); I just don't like the idea of adding a CNAME record or A record every time I need a new subdomain. Then I read this and it says: "The exact rules for when a wild card will match are specified in RFC 1034, but the rules are neither intuitive nor clearly specified. This has resulted in incompatible implementations and unexpected results when they are used."
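    On the "two places" confusion: only the nameservers that the domain's NS delegation (set at the registrar) points at are ever consulted, so records entered in the other control panel are simply ignored; pick one and manage everything there. A wildcard then looks like this in BIND zone-file syntax (names and addresses are illustrative), with explicit records always taking precedence over the wildcard:

      *.example.com.     3600  IN  A  203.0.113.10
      www.example.com.   3600  IN  A  203.0.113.10
      mail.example.com.  3600  IN  A  203.0.113.25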

    Read the article
