Search Results

Search found 40229 results on 1610 pages for 'deleted files'.


  • Windows 7 - mysteriously missing free HDD space

    - by sYnfo
    I have Windows 7 installed on a 50GB (oops, it should have been 45GB, sorry) partition, and every now and then it gets full and I have to resize that partition. I always thought that was quite normal. But it happened again today, and this time I'm sure it is not normal, because since the last resizing (35GB to 45GB) I did not install any new apps or anything. Also, the sum of the sizes of all root folders and files, including hidden & system ones, is ~18GB, yet Windows indicates that all 50GB are used up... Any idea what is going on? EDIT: Great tools, everyone! (SourceForge appears to be offline at the moment, I'll check WinDirStat later.) Alas, none of them has solved my problem just yet... Screenshot from SpaceSniffer: on the right there is some kind of "Unknown Space", any idea what that could be? EDIT2: After those two apps failed to help much I didn't expect it, but WinDirStat actually helped. It showed that those missing 27GB are in my Temp folder (well, that should have been my first guess anyway). There I found hundreds of ~100MB files, named like HTT????.tmp. After some googling it appears to be a problem with ESET NOD32 antivirus and its ThreatSense feature. Thank you all for the help! :)
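    For anyone chasing the same symptom, a quick sanity check from a command prompt will show whether the per-user temp folder is the culprit; the HTT*.tmp pattern below is just the one from this case:

      rem list the largest files first, then delete the offenders
      dir /o:-s "%TEMP%"
      del /q "%TEMP%\HTT*.tmp"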

    Read the article

  • What is the fastest and best way to convert an rmvb video to mp4/mkv without losing any quality?

    - by Eric Leung
    The file will be played on a Popbox 3D. My old method was to convert the video using VidCoder (an offshoot of HandBrake) with normal settings, but I've recently confirmed that this significantly reduces video and audio quality. I bumped the conversion quality up to 'High Profile' and this produced a higher quality video, but raised the conversion time to about twice the video length (95 minutes to convert a 45 minute video) on a Core 2 Duo laptop. This is less than ideal when a large number of videos need to be converted. I have tried a direct remux using MKVToolNix, but this produced a file that refused to display video on the Popbox 3D, which is consistent with the reported: [quote=other old thread] it is possible to put RealMedia A/V in MKV container (used MKVtoolnix) - however, it is awkward to play later. RV40 is only suspected to be based on H.264 - simplify, is not consistent with MPEG-4 AVC specification. [/quote] I have read that... [quote=from old thread] Under normal circumstances, [ffmpeg] should convert the video to .video.mp4 and the audio to (.wav then to) .audio.mp4, then mux the video and audio into a new .mp4 file and delete the temporary video-only and audio-only files.[/quote] and I am currently attempting to discover how this is done. Help? PS: I download a lot of series from Asia, and for some strange reason RMVB is a really popular format over there; sometimes it's the only format that's available. Unfortunately, it's a format that is incompatible with the Popbox 3D, so I have to convert the files before I can watch them on my TV.
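    For reference, a single-pass ffmpeg conversion along the lines the quote hints at might look like this; the codec and quality settings are illustrative assumptions, not values verified against the Popbox 3D (older ffmpeg builds may also need -strict experimental for the built-in AAC encoder):

      ffmpeg -i input.rmvb -c:v libx264 -preset fast -crf 20 -c:a aac -b:a 160k output.mp4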

    Read the article

  • SSD cache to minimize HDD spin-up time?

    - by sirprize
    Short version first: I'm looking for Linux-compatible software which can transparently cache HDD writes using an SSD. However, I only want to spin up the HDD once or twice a day (to write the cached data to the HDD); the rest of the time, the HDD should not be spinning, due to noise concerns. Now the longer version: I have built a completely silent computer running Xubuntu. It has an A10-6700T APU, a huge fanless cooler, a fanless PSU and an SSD. The problem is: it also has (and needs) a noisy HDD, and I want to forbid spinning it up during the night. All writes should be cached on the SSD; reads are not needed in the night. Every day this computer automatically downloads about 5 GB of data which is retained for about a year, giving a total needed disk capacity of slightly less than 2 TB. This data is currently stored on a 3 TB noisy hard disk drive which is spinning day and night. Sometimes I'll need to access data from several months ago, but most times I'll only need data from the last 14 days, which would fit on the SSD. Ideally, I'd like a transparent solution (all data on one filesystem) which caches all writes to the SSD, writing to the HDD only once a day. Reads would be served by the cache if they were still on the SSD, else the HDD would have to spin up. I have tried bcache without much success (using cache_mode=writeback, writeback_running=0, writeback_delay=86400, sequential_cutoff=0, congested_write_threshold_us=0 - anything missing?) and I read about ZFS ZIL/L2ARC, but I'm not sure I can achieve my goal with ZFS. Any pointers? If all else fails, I will simply use some scripts to automatically copy files over to the big drive while deleting the oldest files from the SSD.
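    As a sketch of that script-based fallback (paths and the device name are assumptions): a daily cron job could flush the staging area to the big drive, prune the SSD to a 14-day window, and put the disk back to sleep:

      #!/bin/sh
      # copy everything new to the archive disk (spins it up once)
      rsync -a /ssd/staging/ /mnt/hdd/archive/
      # keep only the last 14 days on the SSD
      find /ssd/staging -type f -mtime +14 -delete
      # send the HDD back to standby immediately
      hdparm -y /dev/sdb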

    Read the article

  • HylaFAX: "No font metric information" error when trying to send a fax

    - by Chau Chee Yang
    I am using HylaFAX 6.0.5 on Fedora 13 x86_64. As there is no rpm package available for Fedora 13, I used the source tarball to install HylaFAX myself. Everything seemed fine during compile and install. I tried to send a fax with sendfax and encountered an error: # sendfax -n -d <fax-number> /etc/passwd /usr/local/sbin/textfmt: No font metric information found for "Courier-Bold". Usage: /usr/local/sbin/textfmt [-1] [-2] [-B] [-c] [-D] [-f fontname] [-F fontdir(s)] [-m N] [-o #] [-p #] [-r] [-U] [-Ml=#,r=#,t=#,b=#] [-V #] files... >out.ps Default options: -f Courier -1 -p 11bp -o 0 Error converting document; command was "/usr/local/sbin/textfmt -B -f Courier-Bold -Ml=0.4in -p 11 -s default >'/tmp//sndfaxp5GdJ9' <'/etc/passwd'" It seems like there is a problem with fonts, although I have ghostscript-fonts installed too. I can't find hyla.conf in the path /etc/hylafax; there is no /etc/hylafax path in my file system. All configuration files seem to be located in /var/spool/hylafax/etc. Please advise. Thank you.
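    A hedged sanity check, since textfmt accepts a font-metrics directory directly via -F (see the usage line above): locate the .afm metric files that ghostscript-fonts installed, then point textfmt at that directory. The path below is an assumption, not a known-good value:

      find / -name 'Courier-Bold.afm' 2>/dev/null
      /usr/local/sbin/textfmt -B -f Courier-Bold -F /usr/share/fonts/default/ghostscript \
          -Ml=0.4in -p 11 -s default < /etc/passwd > /tmp/out.ps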

    Read the article

  • What is the best storage server infrastructure? DAS/NAS/SAN, or installing GlusterFS/Lustre/HDFS/RBDB?

    - by TORr0t
    I am trying to design an infrastructure for the project I am working on. It would be somewhat of a file-sharing/downloading project (like RapidShare); I would need high storage capacity and good scalability, and I would add new storage nodes as the project grows. I have come up with solutions for my project using Lustre, GlusterFS, HDFS and RBDB. To start, I would have 2 servers: one server running the GlusterFS client + web server + DB server + a streaming server, and the other server as a GlusterFS storage node. (After some time, I would add more node servers and client servers; I don't know yet how many new client servers to add, I will see later.) So, I am thinking of working with GlusterFS. But I really wonder whether I have to use high-performance servers with high storage capacity, or whether average/slow servers with high storage capacity are enough. Or are NAS/DAS/SAN solutions better for GlusterFS storage nodes? I might buy a NAS and install GlusterFS onto it. I would be happy to hear your recommendations for the server properties (for both clients and nodes); I really don't know if I need a large amount of RAM and good CPUs for the nodes, though I am sure I need them for the client servers. The files would be streamed as well, so automatic file replication is important; my system should work like a cloud, so that when needed, according to high traffic, the storage nodes copy the most-demanded files to be streamed, which would help me avoid scalability problems and let my visitors stream/download those files. A replication sketch is given below. Also, I am open to your experiences/thoughts about any good solution. Lustre, HDFS and RBDB are the other options, and I would be happy to hear your thoughts on them here. I would be very happy to hear back from anyone on any point I have raised here. Thanks
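    For what it's worth, the replication behaviour described (hot files available from several nodes) maps to a replicated GlusterFS volume; a minimal sketch with two storage nodes, where hostnames and brick paths are placeholders:

      gluster volume create shared replica 2 node1:/export/brick1 node2:/export/brick1
      gluster volume start shared
      mount -t glusterfs node1:/shared /mnt/shared    # on the web/streaming server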

    Read the article

  • Unable to install mod_wsgi on CentOS 5.5 VPS...

    - by jasonaburton
    I am trying to install mod_wsgi on my VPS, but it won't work. This is what I am doing: wget http://modwsgi.googlecode.com/files/mod_wsgi-2.5.tar.gz tar xzvf mod_wsgi-2.5.tar.gz cd mod_wsgi-2.5 ./configure --with-python=/opt/python2.5/bin/python After I run the above command, I get this error: checking for apxs2... no checking for apxs... no checking Apache version... ./configure: line 1298: apxs: command not found ./configure: line 1298: apxs: command not found ./configure: line 1299: /: is a directory ./configure: line 1461: apxs: command not found configure: creating ./config.status config.status: creating Makefile config.status: error: cannot find input file: Makefile.in Through some research I've discovered that I need to modify my command: ./configure --with-apxs=/usr/local/apache/bin/apxs \ --with-python=/usr/local/bin/python But /usr/local/apache/ doesn't exist, or so it tells me. If it doesn't exist, how do I create it with all the files needed? Or if Apache is located elsewhere on my VPS, where would it be? I'd also like to mention that I ran a command to install Apache before this entire deal: yum install httpd, so I assumed that was all I needed, but apparently not (I am very new at all this server administration stuff, so please be gentle). EDIT: This is the tutorial that I have been using to get this all set up: http://binarysushi.com/blog/2009/aug/19/CentOS-5-3-python-2-5-virtualevn-mod-wsgi-and-mod-rpaf/ I got stuck at the heading "Installing mod_wsgi". Thanks for any help!
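    Since httpd came from yum, apxs lives in the matching devel package rather than under /usr/local/apache; assuming stock CentOS packages, something like this should let configure find it:

      yum install httpd-devel
      which apxs    # typically /usr/sbin/apxs on CentOS
      ./configure --with-apxs=/usr/sbin/apxs --with-python=/opt/python2.5/bin/python
      make && make install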

    Read the article

  • Limiting access in Silverlight/PivotViewer

    - by sparaflAsh
    I'm going to deploy a PivotViewer application. As some of you might know, this Silverlight application loads a .cxml index file for a group of images. My need is to make the .cxml file and the image files not directly accessible to the user. Currently I code like this in C#, and the file is hosted in the document root: _cxml = new CxmlCollectionSource(new Uri("http://www.myurl.it/Collection.cxml", UriKind.Absolute)); This means that my cxml, and therefore the images, are available over http to everyone who knows the URI. I'm a newbie at server configuration, so any help/hint would be deeply appreciated. Someone suggested I take the files out of the root, but it seems I can't fetch them from Silverlight if they don't have a URL; at least I didn't manage to understand how. Someone else suggested playing with the web.config file to hide the URLs, but I don't really know where to start. My question is: what's the best practice to hide my stuff? Obviously I can edit the question if you need more details.
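    One hedged starting point for the web.config route: an in-browser Silverlight app fetches the .cxml through the browser's HTTP stack, cookies included, so standard ASP.NET authorization on the collection folder keeps anonymous visitors out while authenticated users still load it. The folder name here is a placeholder:

      <location path="collections">
        <system.web>
          <authorization>
            <deny users="?" />
          </authorization>
        </system.web>
      </location>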

    Read the article

  • Winamp question: Generating 'dynamic playlists' from file playlists -OR- mass-tagging by file playlist

    - by Daddy Warbox
    I'm trying to think of a way to do this. I sort my songs into a variety of playlists corresponding to the different 'moods' I might be in as I listen to them, and some songs fit more than one kind of mood (e.g. a jazz song might be 'stylish' and 'emotional', or something to that effect). I also give them star ratings as a general sort of opinion about them. I want to be able to filter and sort my media library by the moods I want or don't want, as well as by star rating. Anyone have a good way to do something like this? I can't seem to use Winamp's dynamic playlists to generate lists from existing filesystem playlists (e.g. the songs in a given .m3u file). Hand-tagging files with Winamp's tag editor is a royal pain; it's trouble enough just giving a star rating and sorting into playlists as it is. If there is a way to mass-tag the songs within each playlist with mood words to allow me to create dynamic playlists, I'd be fine (for now). It'd be nice if I could do this via some kind of hotkey for each song, too. I'm looking to see if I can use a macro program or something to do that, though. Thanks in advance. P.S.: Alternatively, would something like foobar2000 have functions like this? Note: italics are recent edits.

    Read the article

  • Cause of slow download speed on a particular EC2 instance?

    - by James
    I have a networking issue I'm trying to solve. I have two EC2 instances, same zone, same type. On one of the two EC2 instances (the 'bad' instance) the download speed is really poor (200k/s), while on the other (the 'good' instance) the download speed is fine, comfortably at 30M/s+. To clarify, I'm talking about downloading files to the EC2 instance while ssh'd into the server, e.g. running wget with a large file. I've tried different files, including S3 objects and a large Linux ISO from elsewhere. Running ethtool eth0 only returns 'Link detected: yes' for both. When running ifconfig, both return largely the same output, except that the good instance shows no error packets while the bad instance shows many: UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:168372370 errors:5075643 dropped:0 overruns:0 frame:0 TX packets:122116480 errors:0 dropped:0 overruns:0 carrier:0 Both servers are configured the same, or at least were supposed to be. How can I go about diagnosing the cause of the slow download speed? Is there anything particular to EC2 instances that could cause this? I'm having trouble knowing where to start. Thanks for any help!
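    Given that RX error count, comparing the counters on both instances with stock Linux tools is a reasonable first step (nothing EC2-specific assumed here):

      ip -s link show eth0    # per-interface RX/TX error breakdown
      ethtool -S eth0         # driver-level statistics, if the driver exposes them
      netstat -s              # TCP retransmission and checksum counters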

    Read the article

  • Windows 8, IIS8: how to make PHP imagick work

    - by Laci K
    I'm new to IIS; before IIS, I used Apache 2.x for 6 years, and with Apache, ImageMagick and its PHP module imagick worked just fine, even with x64 versions of PHP, Apache 2.4 and ImageMagick. I tried to make imagick work with IIS8, but it won't. I always get the typical PHP startup warning in my log: PHP Startup: Unable to load dynamic library 'C:\Program Files (x86)\iis express\PHP\v5.4\ext\php_imagick.dll' - %1 is not a valid Win32 application. in Unknown on line 0 And another thing: why is IIS loading PHP from the IIS Express folder if I have PHP in Program Files? But actually, I don't care as long as it works :) What I have done so far: I uninstalled the ImageMagick 6.7.x 64-bit version and installed the latest x86 version, tested it on the command line, and it worked. Then I looked up the latest imagick DLL, which I think was 3.1.0RC2 (found here: http://www.peewit.fr/imagick/), copied the DLL to PHP's ext folder, then edited php.ini and added imagick to the dynamic extensions. After that I restarted IIS and then nothing :( I got the error I wrote about earlier. Today I installed the PEAR package installer because I read somewhere that someone made it work with it, but he also mentioned that he needed to compile wincache too. Isn't there an easier solution to make it work? Could someone maybe write me a step-by-step guide?
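    The "%1 is not a valid Win32 application" warning is the classic architecture/build mismatch: php_imagick.dll must match the PHP binary that actually loads it in bitness (x86 vs x64), thread safety (TS vs NTS) and compiler runtime. A quick way to see what that PHP is, assuming the IIS Express copy really is the one in use:

      "C:\Program Files (x86)\iis express\PHP\v5.4\php.exe" -v
      "C:\Program Files (x86)\iis express\PHP\v5.4\php.exe" -i | findstr /i "architecture thread"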

    Read the article

  • 2nd instance of MySQL closes/doesn't start, no warnings/errors?

    - by acidzombie24
    I have an external HD and I'd like to run a 2nd MySQL instance on it. I used the Windows installer to install/configure mysqld as a service on Windows 7. I took the my.ini from C:\Program Files\MySQL\MySQL Server 5.5\my.ini, then edited the port (client and mysqld), datadir and innodb_data_home_dir. After running this command: "C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld" --defaults-file="f:/dev/my.ini" I found an error, which was all about the innodb_data_home_dir directory not existing. After that, I ran the command again. Mysqld simply starts up for a second then immediately closes, and I see no message in my command prompt. I know the command-line args are correct, as I see the mysqld service using the same ones except for a different my.ini path. Also, it did tell me about the directory not existing, so I know it is reading the new ini file. How do I figure out why this 2nd instance of mysqld is closing? How do I get 2 instances running? I'm using v5.5.
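    On Windows, mysqld swallows startup errors unless told to print them; adding --console (a standard mysqld option on Windows) should surface the reason the second instance exits, and the .err file in the new datadir is worth checking too:

      "C:\Program Files\MySQL\MySQL Server 5.5\bin\mysqld" --defaults-file="f:/dev/my.ini" --console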

    Read the article

  • How to set JS source directory in apache2?

    - by highBandWidth
    I am trying to run a very basic webserver for development/debugging. The static HTML seems to be delivered correctly, but it seems that the JavaScript libraries are not being delivered to the browser. The page HTML says something like <html> <head> <script type='text/javascript' src="/lib/json.js"></script> ... Now, I have set up a link for /lib/ in my httpd.conf as: ScriptAlias /lib/ "/SomeFolder/lib/" When I do this, it can't fetch the files, because this is what I see in my Apache error log: ... [error] [client ::1] client denied by server configuration: /SomeFolder/lib/json.js, referer: http://localhost/SomeSite It seems that Apache is not allowing access to the folder, so I add this to httpd.conf: <Directory "/SomeFolder/lib/"> Allow from all </Directory> After this, browsing the page still does not run the JS; instead I see the following error in my Apache error log: [error] [client ::1] (13)Permission denied: exec of '/SomeFolder/lib/json.js' failed, referer: http://localhost/SomeSite So now it seems that Apache is trying to run the JS files on the server like a CGI script or something. But I have not made that folder a cgi-bin folder. The only lines where SomeFolder is mentioned by name are these lines in httpd.conf: ScriptAlias /lib/ "/SomeFolder/lib/" <Directory "/SomeFolder/lib/"> Allow from all </Directory>
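    The "exec of ... failed" error is the giveaway: ScriptAlias both maps the URL and marks everything under the target directory as CGI, so Apache tries to execute the .js files. A plain Alias serves them as static files; a sketch of the corrected config:

      Alias /lib/ "/SomeFolder/lib/"
      <Directory "/SomeFolder/lib/">
          Options -ExecCGI
          Order allow,deny
          Allow from all
      </Directory>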

    Read the article

  • SQL Server 2008 R2 - Cannot create database snapshot

    - by Chris Diver
    Server: Windows Server 2008 R2 X64 Enterprise SQL: SQL Server 2008 R2 Enterprise X64 I have a default SQL Server instance, and the SQL Server service account is running as a domain user. I am trying to create a database snapshot in the directory where the mdf files are stored. The T-SQL syntax is correct. The file system is NTFS. The error message I get is: Msg 1823, Level 16, State 2, Line 1 A database snapshot cannot be created because it failed to start. Msg 5119, Level 16, State 1, Line 1 Cannot make the file "e:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\TestDB.ss" a sparse file. Make sure the file system supports sparse files. The local SQLServerMSSQLUser$db$MSSQLSERVER group has Full Control permission on the folder where I am trying to create the snapshot. I can fix the problem in two ways, neither of which is suitable: add the SQL Server service (domain) account to the local Administrators group and restart the SQL service, or grant the local SQLServerMSSQLUser$db$MSSQLSERVER group Full Control on E:\ I have tried changing the owner of the DATA directory to SQLServerMSSQLUser$db$MSSQLSERVER, to no avail. I have no issue creating a new database. Why can I not create a snapshot by granting permission only on the DATA folder? Update 23/09/2010: I have tried mrdenny's suggestion with no luck (but learned something new in the process). I suspect the problem may be due to the fact that the domain is a Windows 2000 domain running in mixed mode. I had to install hotfix KB976494 for Server 2008 R2, as the SQL Server 2008 R2 installer would not verify the service account correctly with the domain. I noticed that Server 2000 isn't a supported operating system for SQL 2008 R2 but cannot find anything that suggests it shouldn't work in a 2000 domain. I removed the test server from the domain and changed the service accounts to the local service account, and I still have the same issue. I will try to reinstall the server without joining the domain and without the hotfix and see if the issue persists.
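    One more data point worth collecting: whether the volume reports sparse-file support, and whether the sparse flag can be set while running as the service account (the test file path is an assumption):

      fsutil fsinfo volumeinfo e: | findstr /i sparse
      echo test > "e:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\sparse_test.tmp"
      fsutil sparse setflag "e:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\sparse_test.tmp"
      fsutil sparse queryflag "e:\MSSQL10_50.MSSQLSERVER\MSSQL\DATA\sparse_test.tmp"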

    Read the article

  • Trouble installing SSL Certificate on Apache

    - by jahufar
    We have a dedicated server with GoDaddy running Plesk that requires SSL. I've generated the certificate files and created a vhost_ssl.conf (since I can't edit Plesk's default Apache configuration httpd.include directly, vhost_ssl.conf gets Included into httpd.include) that tells Apache where to find the certificate files: SSLCertificateFile /usr/local/psa/var/certificates/domain.com.crt SSLCertificateKeyFile /usr/local/psa/var/certificates/domain.com.key SSLCertificateChainFile /usr/local/psa/var/certificates/sub.class1.server.ca.pem When I stop/start Apache, it refuses to start up, and the error_log does not have anything in it either (which is strange). Then I opened up httpd.include and found this bit: <VirtualHost 208.xxx.xxx.xxx:443> ServerName domain.com:443 ServerAlias www.domain.com UseCanonicalName Off SSLEngine on SSLVerifyClient none SSLCertificateFile /usr/local/psa/var/certificates/certagC9054 Include /var/www/vhosts/domain.com/conf/vhost_ssl.conf Then I commented out SSLCertificateFile /usr/local/psa/var/certificates/certagC9054 (which is Plesk's SSL certificate) and restarted Apache, and it worked perfectly fine. It seems that Apache does not like multiple SSLCertificateFile directives within the same VirtualHost? As anyone who has worked with Plesk knows, I can't just remove the SSLCertificateFile directive in httpd.include, as Plesk will overwrite my changes when someone uses it, which is why mine is in vhost_ssl.conf. So I'm stuck, and this is beyond my meager admin skills. I would appreciate someone who knows what (s)he's doing telling me what's going on. Thanks in advance.

    Read the article

  • Exchange 2010: Find Move Request Log after move request completes

    - by gravyface
    EDIT: significantly changed my question here to streamline it a bit. I've gone ahead and used 100 as my corrupted item count and run it from the Exchange Shell. So the trail of tears continues with my SBS 2003 to 2011 migration: all the mailboxes have moved from the mailbox store on OLDSERVER to NEWSERVER, with the Local Move Requests completing successfully, except for one. What I'd like to do now is review the log files of the previous move requests: while they were in progress, I could right-click > Properties > Log > View Log File, but now that they're completed, that's not available. Nor can I use: Get-MoveRequestStatistics <user> -includereport | fl MoveReport ... as the move request has now completed, and it errors out with "couldn't find a move request that corresponds...". Basically, what I'd like to do is present the list of bad items to each user so that they're aware of which items didn't come across and, if anything important was lost, can check their current OST, an archive .pst, etc. to recover it if possible. If this all needs to be wrapped up in a batch Exchange PowerShell command that pipes the output to log files on disk somewhere, I'm all ears, and would appreciate it for the next migration we do.
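    One avenue that may still work after the request is cleared: Exchange 2010 keeps the move report inside the moved mailbox itself, so Get-MailboxStatistics can often retrieve it (hedged, since the available switches vary by service pack level); the export path is a placeholder:

      $stats = Get-MailboxStatistics <user> -IncludeMoveReport
      $stats.MoveHistory[0] | Export-Clixml C:\logs\movereport.xml    # bad-item details live in the report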

    Read the article

  • limits.conf to set memory limits

    - by Rupert Jipe
    I would like to limit any process from using more than 500 MB of RAM. AFAIK this is done using rss in /etc/security/limits.conf, but the process called gnome-panel is apparently using 618436 kB of VmRSS. How can this be? /etc/security/limits.conf * hard rss 512000 username@debian:~$ cat /proc/3002/status Name: gnome-panel State: S (sleeping) Tgid: 3002 Pid: 3002 PPid: 2910 TracerPid: 0 Uid: 1000 1000 1000 1000 Gid: 1000 1000 1000 1000 FDSize: 64 Groups: 20 24 25 29 44 46 112 116 117 1000 1002 1003 VmPeak: 916636 kB VmSize: 916636 kB VmLck: 0 kB VmHWM: 618436 kB VmRSS: 618436 kB VmData: 601972 kB VmStk: 104 kB VmExe: 516 kB VmLib: 29232 kB VmPTE: 1760 kB Threads: 1 SigQ: 0/14001 SigPnd: 0000000000000000 ShdPnd: 0000000000000000 SigBlk: 0000000000000000 SigIgn: 0000000020001000 SigCgt: 0000000180000000 CapInh: 0000000000000000 CapPrm: 0000000000000000 CapEff: 0000000000000000 CapBnd: ffffffffffffffff Cpus_allowed: 3 Cpus_allowed_list: 0-1 Mems_allowed: 00000000,00000001 Mems_allowed_list: 0 voluntary_ctxt_switches: 871965 nonvoluntary_ctxt_switches: 47553 PaX: PeMRs username@debian:~$ cat /proc/3002/limits Limit Soft Limit Hard Limit Units Max cpu time unlimited unlimited seconds Max file size unlimited unlimited bytes Max data size unlimited unlimited bytes Max stack size 8388608 unlimited bytes Max core file size 0 0 bytes Max resident set 524288000 524288000 bytes Max processes 100 100 processes Max open files 1024 1024 files Max locked memory 65536 65536 bytes Max address space unlimited unlimited bytes Max file locks unlimited unlimited locks Max pending signals 14001 14001 signals Max msgqueue size 819200 819200 bytes Max nice priority 0 0 Max realtime priority 0 0 Max realtime timeout unlimited unlimited us
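    For context: on 2.6 kernels setrlimit(RLIMIT_RSS) is accepted but not enforced, which would explain a "Max resident set" of 524288000 coexisting with a VmRSS of 618436 kB. If a hard cap is the goal, the memory cgroup controller does enforce one; a minimal sketch (cgroup v1, and the mount point may differ per distro):

      mkdir /sys/fs/cgroup/memory/capped
      echo 512M > /sys/fs/cgroup/memory/capped/memory.limit_in_bytes
      echo 3002 > /sys/fs/cgroup/memory/capped/tasks    # move gnome-panel's PID into the group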

    Read the article

  • CentOS tftp server is broken

    - by Mike Pennington
    I'm trying to run tftpd from xinetd on CentOS 6; however, I can only tftp from localhost. I have a file in /opt/tftpboot/fw.test.conf that I can retrieve if I tftp to localhost: [mpenning@localhost ~]$ tftp localhost tftp> get fw.test.conf tftp> quit [mpenning@localhost ~]$ ls fw.test.conf [mpenning@localhost ~]$ However, I cannot receive this file if I tftp to eth1 on this server (the address on eth1 is 172.16.1.4). [mpenning@localhost ~]$ sudo tshark -i eth1 udp and host 172.16.1.5 Running as user "root" and group "root". This could be dangerous. Capturing on eth1 0.000000 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000 5.000133 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000 10.000184 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000 15.000297 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000 20.000331 172.16.1.5 -> 172.16.1.4 TFTP Read Request, File: fw.test.conf\000, Transfer type: netascii\000 ^C5 packets captured [mpenning@localhost ~]$ I have the following xinetd configuration: [root@localhost mpenning]# cat /etc/xinetd.d/tftp # default: off # description: The tftp server serves files using the trivial file transfer \ # protocol. The tftp protocol is often used to boot diskless \ # workstations, download configuration files to network-aware printers, \ # and to start the installation process for some operating systems. service tftp { socket_type = dgram protocol = udp wait = yes user = root server = /usr/sbin/in.tftpd server_args = -s /opt/tftpboot disable = no per_source = 11 cps = 100 2 flags = IPv4 } [root@localhost mpenning]#
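    The capture pattern (requests arriving, zero replies) is typical of the default CentOS firewall: the read request reaches port 69, but the server answers from a fresh ephemeral port, which iptables drops unless the TFTP connection-tracking helper is loaded. A hedged first test:

      service iptables stop                          # quick check: does tftp from eth1 work now?
      modprobe nf_conntrack_tftp                     # if so, load the TFTP conntrack helper...
      iptables -I INPUT -p udp --dport 69 -j ACCEPT  # ...and open the control port properly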

    Read the article

  • Mac Mini drive problems but SMART verified: bad hard drive or controller?

    - by Zac Thompson
    I have a 3-year-old Intel Mac Mini at home. About a month ago, it stopped booting from the hard drive (internal, SATA, 80GB). I tried booting from the Install Disc to repair the filesystem, but Disk Utility was unable to do so ("invalid node structure"). I was also unable to use the hard drive in the Terminal from the Install Disc, or from an Ubuntu boot CD ("DRDY err"). I could see the contents of some directories, but others would give an error, and I would get failures when trying to copy files. At this point I was sure the filesystem was hosed and I'd want to reformat at least. DiskWarrior let me retrieve the data files I was interested in, which are now copied to an external hard drive, but it reported a high number of problems (the "speed reduced by disk malfunction" count was over 2000) while trying to rebuild the directory for the drive. It also would not let me use the rebuilt directory to replace the one on the drive; it claimed the disk errors prevented recovery in this way. Under normal circumstances I would now assume that the drive itself was going bad: DiskWarrior's "disk malfunction" error above is supposed to imply hardware problems. My initial plan was to buy a replacement for the internal 2.5" drive. However: Disk Utility, the command-line tools and DiskWarrior all reported along the way that the SMART status of the drive was OK/Verified. So I'm now worried that the drive hardware is actually fine, and that the problems were due to a disk controller that has gone "bad" somehow. If this is the case, I'll probably just replace the whole computer. Any advice on how I can tell what is to blame? I don't have a lot of extra hardware sitting around, so I don't have the option of simply dropping the drive into another machine or popping another hard drive into the Mini.
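    "Verified" is only SMART's overall pass/fail flag; the raw attributes say more. From the Ubuntu CD, something along these lines helps separate drive-side defects from cable/controller trouble (package name assumed):

      sudo apt-get install smartmontools
      sudo smartctl -a /dev/sda
      # drive-side trouble: rising Reallocated_Sector_Ct, Current_Pending_Sector, Offline_Uncorrectable
      # cable/controller trouble: UDMA_CRC_Error_Count climbing while the above stay flat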

    Read the article

  • Do registry issues with Win7 persist through a recovery from a system image?

    - by user59089
    So I need a bit of advice, please; here's my situation: I have 1) a system image on a brand new external 1 TB SATA drive, which I managed to capture successfully before 2) my primary system drive went down. I realize this is a fairly simple matter of buying a new primary drive and performing the recovery to the fresh disk... However, the issue is that I believe Win7 was also having some significant issues of its own: basically, Update was unable to install updates, and Backup continually ditched the auto backup schedule. I'd been trying to address those issues while my system was still working, but it was so fruitless that I'm convinced a Win7 reinstall would be best. Now I'm concerned that if I was in fact having what I believe are likely registry-related issues before, these will persist through a recovery; would that likely be correct? I'm mainly worried about recovering my files, so if I did a full recovery from the image, should I be able to then access my individual files and copy them manually to an external drive, so I can then do a full reinstall of Win7? Sorry if this seems obvious, but I've never done a recovery before and am just trying to make sure there are no red flags with what I have in mind...
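    One way to get at individual files without doing the full recovery first, offered as a sketch: a Windows 7 system image is stored as a VHD, which another Windows 7 machine can attach read-only through diskpart (the path below is a placeholder for whatever sits under WindowsImageBackup on the external drive):

      diskpart
      DISKPART> select vdisk file="E:\WindowsImageBackup\<machine>\Backup <date>\<guid>.vhd"
      DISKPART> attach vdisk readonly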

    Read the article

  • Dual boot Windows 8 Pro and Windows 7 on XPS 8500 Special Edition

    - by Jesse
    I am trying to set up a dual boot with Windows 7 Premium and Windows 8 Pro on an XPS 8500 Special Edition. I created a new primary partition on my C: drive, inserted the Windows 8 install disk, and rebooted my computer from DVD. I select custom install, and the dialog box saying "Where do you want to install Windows?" pops up, but none of my drives are listed. Please help me determine what is going on; I don't understand why none of my drives are showing up in this menu, not even the original drive. When I go to Load Driver and click on the partition I created, it tells me "No signed device drivers were found. Make sure the installation media contains the correct drivers, and then click OK." EDIT: I resolved the above issue by running setup from the source folder on the install disk instead of booting from DVD, and was able to locate my new partition and start the install. It completes the first step ("Copying Windows files") just fine, but on the next step ("Getting files ready for installation") my computer restarts and attempts to load Windows 8, which keeps telling me my PC needs to restart. This has been going on in an infinite boot loop. Please help, this has been a nightmare!

    Read the article

  • One server running Django (with nginx and Apache) and a WordPress blog

    - by JCWong
    I have nginx listening on port 80 for my primary site foo.com. It proxies to port 8080, which is where the Django app lives: server { listen 80; server_name www.foo.com foo.com; access_log /home/jeffrey/www/ddt/logs/nginx_access.log; error_log /home/jeffrey/www/ddt/logs/nginx_error.log; location / { proxy_pass http://127.0.0.1:8080; include /etc/nginx/proxy.conf; } location /media/ { root /home/jeffrey/www/ddt/; } location /static/ { root /home/jeffrey/www/ddt/; } location /public/ { root /home/jeffrey/www/ddt/; } } I'd like to have a WordPress blog run on the same server. Apache is listening on port 8080 with this httpd.conf file: NameVirtualHost *:8080 WSGIScriptAlias / /home/jeffrey/www/ddt/apache/ddt.wsgi WSGIPythonPath /home/jeffrey/www/ddt <Directory /home/jeffrey/www/ddt/apache/> <Files ddt.wsgi> Order deny,allow Allow from all </Files> </Directory> I added my WordPress site using a VirtualHost: <VirtualHost *:8080> ServerName www.bar.com ServerAlias bar.com DocumentRoot /home/jeffrey/www/jeffrey_wp </VirtualHost> When I go to bar.com I still see my Django app. Is it possible for these two sites to run on the same server?
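    Nginx picks a server block by Host header, and with only one block defined it is the default for every hostname, so requests for bar.com are proxied with whatever Host header proxy.conf happens to set. A hedged sketch of a second server block that forwards the original Host so Apache's name-based vhost can match:

      server {
          listen 80;
          server_name www.bar.com bar.com;
          location / {
              proxy_pass http://127.0.0.1:8080;
              proxy_set_header Host $host;    # lets NameVirtualHost *:8080 pick the WordPress vhost
          }
      }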

    Read the article

  • permanent NAS-mount in Ubuntu - wrong fs type, bad option, bad superblock

    - by Emil
    My network drive shows up in the file browser, just like my external USB hard drive. Moving, running and editing files works. Hovering over it shows smb://lacie-2big/nasdisk. BUT, when I want to save a file, the drive doesn't come up as an option; all I can see are my other places, including my USB hard drive. I am a complete newbie, but I am GUESSING that it has something to do with the mount not being a "real" mount, just a shortcut to the smb location. So I followed the tutorial at https://wiki.ubuntu.com/MountWindowsSharesPermanently about how to mount a network drive permanently. I edited my fstab to //LaCie-2big/nasdisk /media/nasmount cifs guest,uid=1000,iocharset=utf8,codepage=unicode,unicode 0 0 and running sudo mount -a gave me the following error: mount: wrong fs type, bad option, bad superblock on //LaCie-2big/nasdisk, missing codepage or helper program, or other error (for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program) In some cases useful info is found in syslog - try dmesg | tail or so Now that's a very helpful error message, BUT, before I go any further, I'd be really thankful if one of you could tell me if I'm even in the right ballpark, or if my actual need (to be able to download files, i.e. torrents, directly to the drive) can be met as things already are. Question: how to fix "wrong fs type, bad option, bad superblock on //LaCie-2big/nasdisk, missing codepage or helper program" when running mount -a
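    Two things stand out, hedged as likely rather than certain: mount.cifs comes from a package that is not always installed (the "helper program" half of the error), and codepage=unicode / unicode are not valid cifs mount options (the "bad option" half). So:

      sudo apt-get install cifs-utils    # on older Ubuntu releases the package was smbfs
      # /etc/fstab, with the invalid options dropped:
      //LaCie-2big/nasdisk /media/nasmount cifs guest,uid=1000,iocharset=utf8 0 0
      sudo mount -a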

    Read the article

  • apache authentication

    - by veilig
    I'm trying to set up a local webserver on my network. I want to be able to access the webserver from any machine inside my network without authenticating, and two extra domains also need access without authenticating; everyone else I would like to make authenticate. So far, I can get to it from inside my network, and the two extra domains can access my webserver, but everyone else just hangs; they don't get an authentication prompt or anything. Can anyone tell me what I'm doing wrong here? This is part of my Apache sites-available file so far: <Directory /path/to/server/> Options Indexes FollowSymLinks -Multiviews Order Deny,Allow Deny from All Allow from 192.168 Allow from localhost Allow from domain1 Allow from domain2 AuthType Basic AuthName "my authentication" AuthUserFile /path/to/file Require valid-user Satisfy Any AllowOverride All <Files .htaccess> Order Allow,Deny Allow from All </Files> </Directory>

    Read the article

  • Bootable GRUB partition

    - by MA1
    I have a customized live Fedora 12 USB which is working fine. What I want to do is make a partition of my hard disk bootable so that my customized Fedora can run from the hard disk. To accomplish this I did the following steps: Created a primary partition (/dev/sda2), formatted it as ext3 and set it active. Copied all the files on the live USB to /dev/sda2. The following are the live USB contents (all directories): a. boot b. EFI c. LiveOS d. syslinux Then I installed GRUB in boot/grub and created grub.conf in boot/grub. The following are the contents of each directory on the USB: syslinux/ boot.cat isolinux.bin splash.jpg vesamenu.c32 initrd0.img ldlinux.sys syslinux.cfg vmlinuz0 LiveOS/ livecd-iso-to-disk osmin.img squashfs.img EFI/ boot/ boot.conf grub.conf boot.efi bootia32.conf bootia32.efi splash.jpg splash.xpm.gz vesamenu.c32 initrd0.img isolinux.bin isolinux.cfg vmlinuz0 boot/grub/ core GRUB files grub.conf olpc.fth The following are the contents of grub.conf: default=0 splashimage=/EFI/boot/splash.xpm.gz timeout 2 hiddenmenu title funLinux kernel /EFI/boot/vmlinuz0 root=live:LABEL=myFun rootfstype=auto ro liveimg quiet ssb.blacklist=1 selinux=0 vga=normal nomodeset rhgb initrd /EFI/boot/initrd0.img Now when I try to boot from the hard disk it shows the GRUB menu and Fedora starts to load, but during loading it says: No root device found Boot has failed, sleeping forever So, where is the problem? What am I doing wrong?
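    One detail jumps out: the kernel line mounts root=live:LABEL=myFun, so the initrd searches for a filesystem labelled myFun; if the new ext3 partition was never given that label, the search fails exactly as described. A hedged fix, either direction:

      e2label /dev/sda2 myFun    # give the partition the label the kernel line expects
      # or edit grub.conf to point at the device instead:
      #   kernel /EFI/boot/vmlinuz0 root=live:/dev/sda2 rootfstype=auto ro liveimg ...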

    Read the article

  • Samba PDC share slow with LDAP backend

    - by hmart
    The scenario: I have a SUSE SLES 11.1 SP1 machine as the Samba master PDC with an LDAP backend. One share holds the database files for a client-server application. I log XP and Windows 7 machines into the local domain (example.local); the login is a little slow, but works. The client computers run an executable which opens, reads and writes the database files on the server share. The problem: when running Samba with the LDAP password backend, the client application runs VERY SLOWLY, with a maximum transfer rate of 2500 kbit per second. If I disable LDAP, the client app's speed increases 20x, with a transfer rate of 50 Mbit/sec, and it runs smoothly. I'm testing with just two users and two machines, so concurrency or LDAP size shouldn't be the problem here. The suspects: LDAP and the smb.conf [global] section configuration. The question: what can I do? I've googled a lot but still have no answer. The slow smb.conf WITH LDAP: [global] workgroup = zmartsoft.local passdb backend = ldapsam:ldap://127.0.0.1 printing = cups printcap name = cups printcap cache time = 750 cups options = raw map to guest = Bad User logon path = \\%L\profiles\.msprofile logon home = \\%L\%U\.9xprofile logon drive = P: usershare allow guests = Yes add machine script = /usr/sbin/useradd -c Machine -d /var/lib/nobody -s /bin/false %m$ domain logons = Yes domain master = Yes local master = Yes netbios name = server os level = 65 preferred master = Yes security = user wins support = Yes idmap backend = ldap:ldap://127.0.0.1 ldap admin dn = cn=Administrator,dc=zmartsoft,dc=local ldap group suffix = ou=Groups ldap idmap suffix = ou=Idmap ldap machine suffix = ou=Machines ldap passwd sync = Yes ldap ssl = Off ldap suffix = dc=zmartsoft,dc=local ldap user suffix = ou=Users
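    Two commonly suggested mitigations for this exact symptom, neither verified against this setup: tell smbd to trust LDAP for complete POSIX account data so it skips repeated getpwnam() round-trips, and cache name-service lookups with nscd:

      # smb.conf [global] - requires full POSIX attributes in LDAP
      ldapsam:trusted = yes
      # plus OS-level caching of passwd/group lookups
      rcnscd start    # SLES service command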

    Read the article
