Search Results

Search found 11915 results on 477 pages for 'copy'.

Page 355 of 477

  • All browsers refusing to load a specific image on a webpage?

    - by Johnson
    Out of nowhere today, all 3 of my browsers (FF/Chrome/IE, OS = Win7 x64) are refusing to load the homepage of interfacelift.com correctly. It works fine on other PCs in the house (on the same network), so it is definitely related to this one PC. The browser won't load the main image on the page correctly (even though the source code looks good), but if I point the browser at the exact location of that image, it displays fine. So I can get the HTML index (which locates the resource) and I can get to the resource. So why the heck isn't it displaying properly on the index page? It's almost as if the HTML rendering engine has gone bad, in all 3 browsers at once. I've browsed to a bunch of other sites (including sites very heavy on JS, with HTML much more complex than the one in question here) and am seeing nothing funny. The only wonky thing I've done with my PC in the past several hours was replacing the system file Magnifier.exe with a copy of cmd.exe while playing around with some of the ideas mentioned in this guide. However, I've since restored the files to their previous state, and I don't know how Magnifier would be related to this even if I hadn't. Any ideas? I'm stumped! EDIT: Here is what the broken page looks like in Chrome. And here is the image loaded correctly by itself.
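
    One way to narrow this down (a hedged sketch; the image URL is a placeholder for the real one from the page source) is to fetch the image from the command line with and without a Referer header, since referer-based serving is one of the few things that differs between an in-page load and a direct load:

        curl -I http://interfacelift.com/path/to/image.jpg
        curl -I -e http://interfacelift.com/ http://interfacelift.com/path/to/image.jpg

    If the two responses differ (status code, Content-Length), the problem is on the wire rather than in the rendering engine.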

  • Macbook Pro Triple Boot OS X Lion, Windows 7 and Windows 8

    - by Lloyd Sparkes
    MacBook Pro (Summer 2010 Model, Basic Model) I currently have OS X Lion and Windows 7 running side by side on my MacBook Pro. However, I need to get Windows 8 running as well in this mix (a virtual machine is not good enough, I need the performance). I have created a suitably sized partition (80GB) that is recognizable in Boot Camp. However, every time I try to boot from the USB stick (which worked to install Windows 8 on my PC) using the latest version of rEFIt, it just boots Windows 7 and not the Windows 8 installer. I cannot start the installation from within Windows 7, as it would just install over Windows 7. I'm guessing the Boot Camp emulation is doing something weird to stop the "Press any key to install Windows..." message from appearing (which should happen if the installer detects Windows is already installed, e.g. if you left your install disk in). Is there a way to get around this / force the installer to start? (Note I cannot start the Windows 7 installer either, if I wanted to install a second copy of Windows 7 to upgrade to Windows 8.)

  • How to repair a damaged transaction log file for Exchange 2003

    - by Markus Larsson
    Hi! Yesterday we had a power failure and the UPS did not work (it had worked perfectly before). Everything seemed to be OK when I started all the servers again, except the mail: when I try to mount the store I get the following message: "The database files in this store are corrupted". Server: Exchange 2003 running on a Small Business Server. Latest full backup: one week old. Backup program: Backup Exec 9.0. This is what I have done: 1. Copied every file in the MDBDATA folder (edb, stm, log). 2. Ran Eseutil /d for priv1.edb. 3. Ran Eseutil /p for priv1.edb (took seven hours). 4. Ran Isinteg -fix -test alltests; now it breaks down. Isinteg fails with the following error: "Isinteg cannot initiate verification process. Please review the log file for more information." The problem is that there is no log file created. 5. Giving up on this route, I decided to do a restore from the backup. It fails with the following error: "Unable to read the header of logfile E00.log. Error -501", and the error: "Information Store (5976) Callback function call ErrESECBRestoreComplete ended with error 0xC80001F5. The log file is damaged." My conclusion is that E00.log is damaged, so how can I repair it so that I can restore the database? Or should I give up and try some other route?
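
    For reference, a minimal sketch of the usual checks before (or instead of) a hard repair, run from the MDBDATA folder and assuming default Exchange 2003 paths; eseutil /ml verifies the log set's checksums, and /r attempts a soft recovery that replays the logs:

        eseutil /ml E00
        eseutil /r E00 /l "C:\Program Files\Exchsrvr\MDBDATA" /d "C:\Program Files\Exchsrvr\MDBDATA"

    If E00.log itself fails the checksum, soft recovery will not get past it; in that case a common approach is to move the damaged log files out of MDBDATA before running the Backup Exec restore, so the restore can lay down its own log set rather than trying to read the broken header.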

  • How do I diagnose a bottleneck in an Intel Atom based Ubuntu server?

    - by Jon Cage
    I have a small media server at home which has software RAID and a gigabit link to the rest of my network. For some reason, though, I only get ~10MB/s transfers when copying to/from the server. I use software RAID5 (mdadm) over four 1TB disks. On top of that I use LVM to give me a huge pool of disk space, which is then split up into multiple partitions that can be resized as and when they need it. I'm guessing this is most likely the cause, but I'd like to know for sure where the root cause is. So, how can I benchmark network throughput (Windows 7 desktop <-> Ubuntu server) and hard disk performance to try to identify where my bottleneck might be? [Edit] If anyone's interested, the motherboard is an Intel Desktop Board D945GCLF2. So that's a 300-series Atom processor with the Intel® 945GC Express Chipset. [Edit2] I feel like such a fool! I just checked my desktop and I had the slower of the two onboard NICs plugged in, so the server is probably not at fault here. Transferring a copy of Ubuntu off the server I get ~35-40MB/s according to Windows 7. I'll do those HD tests when I get a chance though (just for completeness).
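
    A rough way to split the problem is to benchmark the network and the disks separately - a sketch, assuming iperf is installed on both ends and the md array is /dev/md0 (adjust to taste):

        # network: run 'iperf -s' on the desktop, then from the server:
        iperf -c desktop-hostname
        # raw read speed of the array:
        sudo hdparm -t /dev/md0
        # sequential write through the filesystem, flushed to disk (delete the file afterwards):
        dd if=/dev/zero of=/srv/testfile bs=1M count=2048 conv=fdatasync

    Whichever number lands closest to the observed 10MB/s is the layer worth digging into.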

  • Controller Error: Do I need to worry?

    - by Kryten
    Hi, I have an HP Pavilion dv5224ea laptop with Windows 7 on it. Recently I discovered an error in Event Viewer: "The driver detected a controller error on \Device\Ide\IdePort1." More details:

        System
          Provider [Name]: atapi
          EventID: 11 [Qualifiers: 49156]
          Level: 2, Task: 0, Keywords: 0x80000000000000
          TimeCreated [SystemTime]: 2010-03-07T12:43:07.090197600Z
          EventRecordID: 30198
          Channel: System
          Computer: Alistair-Win7
        EventData
          \Device\Ide\IdePort1
          0000100001000000000000000B0004C002000000850100C00000000000000000000000000000000000000000000000000000000004100000

    (The event's binary dump, shown in words and in bytes, is just a second rendering of the same hex data above.) Event Viewer is recording A LOT of these errors (sometimes 13, one after the other!). Do I need to worry? What does this error mean? What device could "\Device\Ide\IdePort1" be? What is an ATAPI error? Do I need to re-install Windows? I generally find this occurs when I try to back up my machine (using Windows Backup) or when using a program that uses Volume Shadow Copy. I have run "sfc" - no problems. There are no device errors in Device Manager. I have also run "vssadmin list writers" - no problems. What's going on??? Would it be a good idea to re-install Windows 7?
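
    To watch how often these land without clicking through Event Viewer, a one-liner using the built-in wevtutil pulls the most recent atapi events (a sketch; adjust the count):

        wevtutil qe System /q:"*[System[Provider[@Name='atapi'] and (EventID=11)]]" /f:text /c:20

    Event ID 11 from atapi generally points at the IDE/SATA link (cable, drive, or controller) rather than at Windows itself, which would be consistent with sfc and VSS coming back clean; a reinstall is unlikely to change it.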

  • How to deduplicate 40TB of data?

    - by Michael Stauffer
    I've inherited a research cluster with ~40TB of data across three filesystems. The data stretches back almost 15 years, and there are most likely a good number of duplicates, as researchers copy each other's data for different reasons and then just hang on to the copies. I know about de-duping tools like fdupes and rmlint. I'm trying to find one that will work on such a large dataset. I don't care if it takes weeks (or maybe even months) to crawl all the data - I'll probably throttle it anyway to go easy on the filesystems. But I need to find a tool that's either somehow super-efficient with RAM, or can store all the intermediary data it needs in files rather than RAM. I'm assuming my RAM (64GB) will be exhausted if I crawl through all this data as one set. I'm experimenting with fdupes now on a 900GB tree; it's 25% of the way through and RAM usage has been slowly creeping up the whole time - it's now at 700MB. Or, is there a way to direct a process to use disk-mapped RAM, so there's much more available and it doesn't use system RAM? I'm running CentOS 6.
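
    One low-RAM approach, sketched below with placeholder paths, is to stage everything in flat files and let sort(1) do its work on disk (it spills to temporary files by design): inventory by size, hash candidates, then group identical hashes.

        # 1) inventory sizes first (cheap); the refinement is to hash only size collisions
        find /data -type f -printf '%s %p\n' | sort -n > sizes.txt
        # 2) blunt version: hash everything over 1MB; intermediate data lives on disk
        find /data -type f -size +1M -exec sha256sum {} + > hashes.txt
        # 3) identical hashes, grouped and separated by blank lines
        sort hashes.txt | uniq -w64 --all-repeated=separate > dupes.txt

    Nothing here holds more than a line or two in memory at a time, so the working set stays flat no matter how large the tree grows.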

  • Dynamic nginx domain root path based on hostname?

    - by Xeoncross
    I am trying to set up my development nginx/PHP server with a basic master/catch-all vhost config so that I can create unlimited ___.framework.loc domains as needed.

        server {
            listen 80;
            index index.html index.htm index.php;

            # Test 1
            server_name ~^(.+)\.frameworks\.loc$;
            set $file_path $1;
            root /var/www/frameworks/$file_path/public;
            include /etc/nginx/php.conf;
        }

    However, nginx responds with a 404 error for this setup. I know nginx and PHP are working and have permission, because the localhost config I'm using works fine:

        server {
            listen 80 default;
            server_name localhost;
            root /var/www/localhost;
            index index.html index.htm index.php;
            include /etc/nginx/php.conf;
        }

    What should I be checking to find the problem? Here is a copy of the php.conf they both load:

        location / {
            try_files $uri $uri/ /index.php$is_args$args;
        }
        location ~ \.php$ {
            try_files $uri =404;
            include fastcgi_params;
            fastcgi_index index.php;
            # Keep these parameters for compatibility with old PHP scripts using them.
            fastcgi_param PATH_INFO $fastcgi_path_info;
            fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            # Some default config
            fastcgi_connect_timeout 20;
            fastcgi_send_timeout 180;
            fastcgi_read_timeout 180;
            fastcgi_buffer_size 128k;
            fastcgi_buffers 4 256k;
            fastcgi_busy_buffers_size 256k;
            fastcgi_temp_file_write_size 256k;
            fastcgi_intercept_errors on;
            fastcgi_ignore_client_abort off;
            fastcgi_pass 127.0.0.1:9000;
        }
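
    One nginx gotcha worth checking here (a hedged guess, but it matches the symptom): numbered captures like $1 are re-evaluated by every later regex match, including the location ~ \.php$ block in php.conf, so by the time root is resolved for a PHP request the capture may no longer hold the hostname. A named capture sidesteps that:

        server_name ~^(?<file_path>.+)\.frameworks\.loc$;
        root /var/www/frameworks/$file_path/public;

    (The set $file_path $1; line then becomes unnecessary.) It is also worth confirming that the wildcard DNS or hosts entries actually resolve the test domains to this box.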

  • Spurious alleged file corruption on Windows 7

    - by Johannes Rössel
    Recently my laptop sometimes warns about corrupted files on the hard drive (Samsung SSD PB22-JS3 TM). So far this has only happened when updating (or checking out) an SVN repository with either TortoiseSVN or the command-line Subversion client. The fun thing is that the corrupted file has always been a .svn directory (although the directory entry may contain files in that directory too, if they're small enough? which should be the case with SVN). However, when looking into the warned-about directory I notice nothing strange or unusual, get no further warnings about it, and another attempt at updating the working copy works (well, mostly; sometimes it happens again, albeit with a different directory - SVN stops updating once the error occurs, TortoiseSVN even with an appropriate error message). Since the laptop is only a few months old I doubt the SSD is failing already - five months of normal usage shouldn't be too surprising. Also, it has (so far) occurred only with SVN updates on a large repository. Maybe that's too many writes in a short time and some part between the software and the hardware doesn't quite catch up fast enough? I don't know enough about this to actually make an informed guess here. Anyone know what's up? ETA: I've run chkdsk (it seems to schedule itself anyway when this happens) and it didn't find anything out of the ordinary.

  • Refresh file access time under Linux / Discard disk read cache

    - by calandoa
    I am making use of the access time to analyse a build process, but it is not working the way I want: the access time is updated the first time I read the file, then it stays the same for a long while, or until the next reboot. For instance:

        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 10:03 some_file
        $ grep abcdef some_file
        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 11:24 some_file
        # The access time is updated
        # waiting a few minutes...
        $ grep abcdef some_file
        $ ll -u some_file
        -rw-r--r-- 1 root root 1.3M 2010-04-07 11:24 some_file
        # The access time has not been updated :(

    I suppose that the file is buffered by Linux in free memory, and only this copy is accessed the subsequent times, for speed reasons. A solution would be to discard the buffers in memory. After searching some forums, I found:

        sync
        echo 1 > /proc/sys/vm/drop_caches
        echo 2 > /proc/sys/vm/drop_caches
        echo 3 > /proc/sys/vm/drop_caches

    But it is not working; it seems that it only syncs up the write buffers, not the read ones. Maybe it is due to some custom kernel configuration on my distro (Fedora 9)? Or am I missing something here? Is there a way to achieve this access-time refresh? Note also that I do not want to simulate writes on my entire file tree: because I am using a makefile-based build system, this would cause the entire project to be built again.
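
    A likely culprit (hedged - it depends on the mount options in play) is the relatime mount option rather than the page cache: with relatime, a read only updates atime when the stored atime is older than mtime/ctime or more than a day old, regardless of whether the data came from cache or disk. A quick check and workaround:

        mount | grep ' / '                        # look for relatime/noatime in the flags
        sudo mount -o remount,strictatime /       # kernels 2.6.30+
        sudo mount -o remount,norelatime /        # older kernels

    With strict atime semantics restored, every read should bump the timestamp whether or not the file was cached.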

  • PHP & IIS 6 Encoding problem

    - by Alexander
    The server is running Windows 2003 with IIS 6.0.3790.1830 x86 (iis.dll). My database server is Microsoft SQL Server 2000. My PHP version is 5.3. The original application is hosted on appserv1 and its database is on dbserv1. It's working fine - everything is tuned up and running great. The same application (different modules) was needed on another server for other uses, so I copied the database to dbserv2 and configured appserv2 to host the application, giving me two almost identical copies. Both dbserv1 and dbserv2 use the same encoding; both appserv1 and appserv2 are on IIS6 with the same PHP configuration. I also tried my best to have the same settings in IIS, and I made sure that I pass the encoding information both in the HTTP headers and in the meta tags with http-equiv. Both applications use UTF-8. The problem is that the copy of the application doesn't display non-ASCII characters normally in the browser, even though the browser correctly detects the UTF-8 encoding of the page. At first I thought it was a database issue, given that MSSQL 2000 doesn't support UTF-8 and uses UCS-2 instead, but when I redirected the application on appserv2 to work with the database on dbserv1, it had the same encoding problems. This is why I am asking how I can make it work. Thank you for reading.
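
    Since the same database produces different output on the two app servers, two things are worth diffing side by side - a hedged checklist, not a definitive fix: the effective php.ini on each box, and the codepage conversion done by the mssql extension. For instance:

        ; php.ini on both appserv1 and appserv2 - these should match
        default_charset = "UTF-8"
        ; only honored when the mssql extension is built against FreeTDS:
        mssql.charset = "UTF-8"

    On Windows builds of the mssql extension that use MS DB-Library instead of FreeTDS, the UCS-2 data is converted using the OS ANSI codepage, so a difference in regional/locale settings between the two servers could also explain the symptom.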

  • How does the LeftHand SAN perform in a Production environment?

    - by Keith Sirmons
    Howdy, I previously asked this ServerFault question: Does anyone have experience with LeftHand's VSA SAN? The general consensus is that it does not perform well enough for a production SQL server, even at a light load. So the new question is: how does LeftHand's SAN perform on the HP or Dell dedicated hardware boxes? We are looking at the Starter SAN with 2 HP nodes in 2-way replication, and 2 ESX servers hosting a total of 2 Active Directory servers, 1 MS SQL server, 1 file server, and 1 general-purpose server for things like virus scanning (all Microsoft Server 2005 or 2008). The reason I am looking at LeftHand is the complete software package. I plan to have a DR site and like how the SAN can perform an async replication to the offsite location without having to go back to the vendor for more licenses. I also like the redundancy built into the Network RAID architecture. I have looked at other SANs and found different faults with them. For example, Dell's EqualLogic: I found that although the individual box is very redundant in hardware, the data, once spanned across multiple boxes, is not redundant - if a node goes down you have lost the only copy of the data sitting on that hardware. (One thing is certain: all hardware fails... When? is the only question.) I have used a XioTech SAN as well - well worth the money BTW, but I think it is overkill for the size of the office I am targeting. The cost to get the hardware redundancy in the XioTech makes it a little out of reach for the budget I am working in. Thank you, Keith

  • How to install service packs onto Thin Apps and other questions

    - by Buzlightyear
    I have a VM that I use to build all my ThinApps. Once I have done an application and it's built, I copy just the bin folder to a server which has a share, and I then create a shortcut to the program exe on users' desktops. This all seems to work OK. Firstly, is this correct? If so, how then do I apply a service pack to that application after the ThinApp VM has been reverted to a base image? I have Corel Draw, which I have a service pack for, but it won't install it, saying it can't find the package installed on the computer. I have a Thinreg batch file, so I have included Corel in that, and the program appears in the program list. I have the sandbox redirected to the server share mentioned earlier, into separate user folders, when the user logs in. Obviously, once I have done an app I revert to a previous snapshot, so my ThinApp VM is back to the base image and the Corel install is lost. Sorry if I'm missing something very simple. Is it the case that if you have an exe/msp you have to reinstall the whole app and start over? Thanks for any help.

  • Problem importing pictures from a digital camera with Windows 7

    - by snark
    Hi, Since I'm using Windows 7 (Beta, then RC, now RTM), I have issues when I download pictures from my digital cameras. It happens with both my cameras: a Canon PowerShot S2 IS and a Canon Ixus 80 IS. When I plug a camera (either of them) into a USB port and switch it on in Play mode, the Autoplay function of Windows 7 starts with this screen: I select "Import pictures and videos" to call the native Windows 7 tool. It searches a bit for pictures to download from the camera and starts the transfer. However, during the transfer, I often get errors like this one: If I use "Try again", it works fine the second time and the picture is retrieved correctly. It's very annoying when it happens 20 or 30 times in a 500-picture download. I cannot leave it running unattended, as I have to watch for the errors and click on "Try again". Any idea what is causing these errors? I tried changing the USB port (normally the cameras are connected via a USB hub, but it happens also when I connect them directly to a motherboard USB port) and the USB cable, but no success. I also checked the SD cards by connecting them with a card reader and running ChkDsk on them, but it found no errors on the cards. Update: No problem when I copy the pictures manually with Windows Explorer, and no problem either when I access the card with a reader. The built-in import tool of Windows is convenient as it sorts the pictures automatically by date (1 folder per day), and this is the way I sort my pictures.
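
    As a stopgap while diagnosing, a scripted copy can retry failed reads automatically - a sketch assuming the card reader (or camera, if it exposes a drive letter) mounts as E:, with robocopy's standard retry options:

        robocopy E:\DCIM C:\Users\You\Pictures\import /S /R:3 /W:5

    Sorting into one folder per day would then have to come from a second step or a photo tool, so this is a workaround for the transfer errors rather than a fix for the import wizard itself.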

  • How to "swap in" memory from the page file back to physical memory in Windows at once (like Linux swapoff)

    - by Arnout
    Is there a way to swap back in (to put back into physical memory) all the data that was pushed out to the page file (or swap, whatever you prefer) on a Windows PC? On Linux, one can easily do this with swapoff /dev/sdaX, where X is the swap partition. On Windows, it seems to ask me to reboot each time. The reason I'd like to do this is that, even though swapping the data out to the page file allows me to play a resource-hungry game fully in physical RAM, when I stop the game, all the rest of my programs run slowly. This is of course normal; all the programs were pushed into the page file because my RAM was too small, and all memory access to those programs after gaming runs into hard page faults, with major delays and some frustration as a consequence. However, that frustration could easily be avoided by simply allowing the PC to copy all the data back into physical memory for a minute or so, and then resume working on a fast PC (rather than having to endure the slowness while working). Thanks in advance for any advice on this! Kind regards
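
    For reference, the Linux round trip the poster is describing - forcing everything out of swap and back into RAM in one go - looks like this (it needs enough free RAM to hold the swapped data):

        sudo swapoff -a && sudo swapon -a

    As far as I know, Windows has no direct built-in equivalent; the pagefile is only fully released at shutdown, which is why the reboot prompt keeps appearing.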

  • How to create VHD disk image from a Linux live system?

    - by Federico
    Once more, I have to resort to the experts here at SuperUser, as my other sources (mainly Google ;-)) didn't prove very helpful... So basically, I would like to create a VHD image of a physical disk to be archived/accessed/maybe even mounted in a virtual machine. Now, there are dozens of articles and tutorials on how to do that on the web, but none that meets exactly the conditions I would like to achieve: I would like the destination file to be a VHD image, as Windows 7 can mount it natively, even over the network, and many other programs can use it (VirtualBox, ...). The disk I'm trying to image contains a Windows XP install, so in theory I could use the disk2vhd utility, but I would like to find a solution that doesn't require booting that Windows XP install (i.e. keeps the disk read-only). Thus I was searching for a solution involving some sort of live system (running from a USB stick or the network). However, all the solutions I've come across either make use of disk2vhd or use the dd command under Linux, which does a complete copy of the disk (i.e. even empty blocks) and does not output a VHD file. Is there a tool/program under Linux that can directly create a VHD file? Or is it possible to convert a raw disk image created using dd to a VHD file, without allocating space for the empty blocks? How would you proceed? As always, any advice or comment is highly appreciated!!
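
    One route that fits these constraints (a sketch - it assumes qemu-img is available on the live system and the source disk is /dev/sda) is to convert straight from the raw device to VHD, which qemu-img calls the "vpc" format. Its dynamic VHD output skips blocks that read as all zeros, so zeroing free space beforehand keeps the image small (for the NTFS disk described here, sdelete -z run from Windows is the usual tool; zerofree only handles unmounted ext partitions):

        sudo qemu-img convert -f raw -O vpc /dev/sda windowsxp.vhd

    VirtualBox users can get the same result from a dd image with VBoxManage convertfromraw --format VHD. Either way, the source disk is only ever read.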

  • Since upgrading to Windows 8.1, I can't open any files on a SMB share shared by my OS X Mavericks Mac

    - by Gary
    I have a PC with Windows 8.1 and a Mac with Mavericks. I have a folder on the Mac that is shared with the PC. When I'm on the PC and try to open a file shared by the Mac, such as an ISO file (a disk image), I get a message saying that I cannot open the file, or that the file is in use (it depends on the app/filetype). I have the same problem when I open a video file. Strangely, text files and PDF files are just fine. And if I copy any of the problematic files to the local Windows disk, then I can open them just fine. The specific error messages are: AVI files opened in VLC: "Your input can't be opened. VLC is unable to open the MRL." ISO files opened by Windows Explorer: "Sorry, there was a problem mounting the file." This only started happening after I upgraded to Windows 8.1 on the PC and Mavericks on the Mac. Mavericks upgraded its default SMB version from SMB1 to SMB2, so perhaps that is related? Does anyone know what the problem might be, and how I could fix it? Thanks in advance!

  • Rsync: General file/folder synchronization

    - by Rey Leonard Amorato
    I have a file server which is in charge of pulling a folder tree from multiple workstations on a daily basis. My current method for this is rsync, which works pretty well provided directory names and/or files remain the same. However, when files are renamed or moved about within subdir1, rsync will copy them over to the server again, creating duplicates. I then have to manually find and delete the extraneous files/folders that were left on the server during previous syncs. Note that I cannot use rsync's --delete flag, because any sync from a workstation would then mirror that particular folder tree instead of merging them all onto the server. Visual diagram:

        Server:        Workstation1    Workstation2    Workstation(n)
        Folder*        Folder*         Folder*         Folder*
         -subdir1       -subdir1        -subdir1        -subdir(n)
          -file1          -file1          -file2          -file(n)
          -file2          -file(n)

    Is there a simple script (preferably in bash, nothing fancy) that can delete the extraneous files/folders in the event a file is renamed or moved to a different subdir? Is there a different program, much like rsync, that can accomplish this task autonomously and in a much simpler manner? I have looked at unison, but I did not like the fact that it keeps a local database for the syncing info. Any tips at all as to how I am supposed to tackle this? Thank you in advance for your help. EDIT: I have tried unison just recently and I can safely say it is out of the question now. unison is a bi-directional synchronization tool and, from my testing, it mirrors the files existing on the server to all workstations - this is unwanted. Preferably, I would want files/folders to stay within their respective workstations and just merge to the server. AKA uni-directional sync, but with renames/moves propagated to the server. I might have to look into Git/Mercurial/Bazaar as mentioned by Kyle, but I am still unsure if they are fit for the job.
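
    If each workstation can be given its own subtree on the server, --delete becomes safe, because no workstation ever mirrors over another's files - a hedged sketch (hostnames and paths are placeholders):

        #!/bin/bash
        # run on the server: pull each workstation into its own directory, so that
        # --delete only prunes files that the same workstation renamed or moved
        for host in workstation1 workstation2; do
            rsync -av --delete "$host:/path/to/Folder/" "/srv/backup/$host/Folder/"
        done

    The trade-off is that the server-side tree is partitioned per workstation rather than being one merged Folder; a true merged view with rename propagation is exactly the problem version-control tools solve, which is why Git/Mercurial/Bazaar keep coming up.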

  • How to HIDE "client denied by server configuration:" error in log

    - by Keith
    I want to block access to my web server by default as a precaution, but I keep getting the following errors showing up in my error log:

        [Wed Jun 27 23:30:54 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Edu.jar
        [Wed Jun 27 23:32:40 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/REST.jar
        [Wed Jun 27 23:35:39 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Set.jar
        [Thu Jun 28 01:01:17 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxyheader.php
        [Thu Jun 28 02:34:57 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxy.php
        [Thu Jun 28 05:41:33 2012] [error] [client 58.218.199.227] client denied by server configuration: /home/www/default/proxyheader.php
        [Thu Jun 28 06:55:10 2012] [error] [client 180.76.6.20] client denied by server configuration: /home/www/default/
        [Thu Jun 28 07:31:26 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Edu.jar
        [Thu Jun 28 07:32:25 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/REST.jar
        [Thu Jun 28 07:36:10 2012] [error] [client 86.77.20.107] client denied by server configuration: /home/www/default/Set.jar

    I don't really want these errors to show up, but whatever I do, I can't get rid of them. Does anyone know how I can achieve this? Here is a copy of my configuration:

        <VirtualHost *:80>
            DocumentRoot /home/www/default
            <Directory />
                AllowOverride None
                Order Deny,Allow
                Deny from all
            </Directory>
            #ErrorLog /var/log/apache2/error.log
            #LogLevel warn
            CustomLog /var/log/apache2/access.log combined
        </VirtualHost>
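
    With Apache 2.2 the error log cannot be filtered per message, so the usual options are raising the threshold for this vhost or sending its errors to a separate file you can ignore - a sketch (pick one of the two directives):

        <VirtualHost *:80>
            # ... existing config ...
            LogLevel crit                                  # drops error-level denial messages
            ErrorLog /var/log/apache2/default-deny.log     # or: quarantine them in their own file
        </VirtualHost>

    On Apache 2.4 this gets finer-grained: per-module levels such as LogLevel authz_core:crit silence just the module that logs these denials.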

  • Can't get .htaccess to work

    - by orokusaki
    I'm using Apache2 on Ubuntu Lucid Lynx. I have the config set to use .htaccess as normal. This is my default site:

        <VirtualHost *:80>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride All
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride All
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride All
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Allow from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
        </VirtualHost>

    I've tried lowercase "all" (AllowOverride all) as well. My .htaccess file looks like this:

        //Rewrite all requests to www
        Options +FollowSymLinks
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^mydomain.com [nc]
        RewriteRule ^(.*)$ http://www.mydomain.com/$1 [r=301,nc]

        //301 Redirect "old_junk.html" File to "new_junk.html"
        Redirect 301 /old_junk.html /new_junk.html

        //301 Redirect Entire Directory "old_junk/" to "new_junk/"
        RedirectMatch 301 /old_junk/(.*) /new_junk//$1

        // Copy and paste redirect examples from above:

    (with mydomain replaced with my actual domain... and my computer is plugged in)
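
    One likely culprit, offered as a hedged guess: Apache config files, including .htaccess, only treat lines starting with # as comments, so each //... line above is parsed as an unknown directive and makes Apache return a 500 error for the directory. The first block with valid comment syntax would look like:

        # Rewrite all requests to www
        Options +FollowSymLinks
        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^mydomain.com [NC]
        RewriteRule ^(.*)$ http://www.mydomain.com/$1 [R=301,NC]

    The vhost's error log should confirm it with an "Invalid command '//Rewrite'" style message.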

  • My Western Digital 500GB Passport disk says "not formatted" when I plug it in Windows

    - by learnerforever
    Hi, When I plug my Western Digital 500 GB Passport disk into my Windows machine, it says something like "not formatted, do you want to format it". I started having this problem after I put it in an old desktop at home. I don't know exactly what went wrong - maybe the partition table is corrupted. Questions: A quick search on the internet tells me there are partition-fixing utilities which can repair a corrupted partition table, testdisk being one such utility. I can understand how to use it to copy files from the disk to some other location, but I would like to fix the partition table in place, so that I don't have to temporarily move around my data (approx. 300GB), format the Passport disk, and then bring the data back. Is there any way I can fix the partition table in place? Also, how do I find out which filesystem was originally on the disk? Can I only keep the same filesystem? My current laptop is running Windows 7. Earlier I used Windows Vista, and my other laptop has Windows XP, so I have access to Windows 7 and Windows XP. Please help! Thanks,
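
    TestDisk can do exactly the in-place repair being described - it rewrites the partition table without touching the data. A sketch of the interactive flow, assuming the Passport shows up as /dev/sdb on a Linux live CD (the Windows builds of TestDisk work the same way):

        sudo testdisk /dev/sdb
        # then, in the menus:
        #   select the disk -> [Intel] partition table -> [Analyse] -> [Quick Search]
        #   -> ([Deeper Search] if nothing is found) -> [Write], then reconnect the disk

    During the Analyse step, TestDisk also reports the filesystem it detects (NTFS, FAT32, ...), which answers the "which filesystem was there" question along the way.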

  • Setting up a dual boot by installing cloned partitions using Clonezilla

    - by Nimjox
    I'm trying to set up a dual-boot system with Windows 7 and Linux Mint. Here's the kicker: both are partition images I've saved using Clonezilla from different places and, to make matters worse, the Linux Mint one is formatted as LVM. I need both of these images specifically, as the Windows one is a corporate image that I must use and the other is a development image that took me a week to set up. I've gotten it almost all working, but my issue is that I can't get Clonezilla to not mess up the partition table of Windows when installing Mint, or vice versa. I can use the -k1 option, which doesn't copy the partition table, but then the cloned partition is unusable, and I'm not sure how to fix the partition table. Here's what I'm doing: 1. Use GParted to make partitions: sda1 40GB ntfs (Windows), sda2 extended 70GB, sda5 lvm2 pv 69.99GB (Linux), sda3 500MB (GRUB). 2. Clonezilla the Windows image into the sda1 partition (keeping the partition table). 3. Clonezilla the Linux image into the sda5 partition (not recreating the partition table). After all that, I can boot into Windows using the default MBR, and I can use a rescue/repair CD to reinstall GRUB, which will see Windows 7, but I can't get it to see the Linux OS. I'm thinking it's because of the sda5 partition, but I'm not sure. Any ideas on what I could do to get this working or where I might be going wrong? If there is any additional detail you need, please let me know and I'll edit, as this is a lot.
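
    One way to stop the two restores from fighting over the partition table: lay the table out once with GParted, save it, and restore the saved copy whenever a Clonezilla run overwrites it - a sketch, assuming the target disk is /dev/sda:

        sudo sfdisk -d /dev/sda > sda-table.bak    # dump the table right after partitioning
        sudo sfdisk /dev/sda < sda-table.bak       # put it back if a restore clobbers it

    With the table pinned down, both images can then be restored with the "don't touch the partition table" option (-k1) into their pre-made partitions.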

  • Python coding with VLC player (quite a basic query I expect)

    - by Todd
    I'm fairly new to the whole coding realm, so my knowledge is fairly limited, and I can't seem to find any basic tutorials on how to use scripts with VLC player. More specifically, the reason I'm asking here is because I stumbled across a post on this site about playing random clips from random videos in VLC player automatically. This is the forum post: Playback random section from multiple videos changing every 5 minutes. My situation is similar to that lovely gentleman's, though he clearly knows a lot more about coding than I do. In short, I'd like to copy this code into a file of some sort and use it with VLC player myself. Only I'm not sure what file type I'd have to save it as (I have Python, by the way, and I tried saving it as a .py file, but I didn't know if that was correct or where to go from there). Additionally, I'm not sure how to get VLC to "read" the script, so to speak - is there a specific location the file needs to be in, and do I run the script from another program or through VLC? I'll reiterate that I'm relatively new to this, so if anybody would be so kind as to post a quick list of steps on how to save/place the file and use it with VLC player, I really would appreciate it! P.S. I'm not computer illiterate; I'm fine with most programs, and I'd understand if you just said things like "C:\Program Files (x86)\VideoLAN\VLC\plugins" or "in VLC, select Tools > Plugins and extensions", I just wouldn't catch on to anything about adding a line of code that does something without being told exactly what to write! Many thanks in advance! :) Todd
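
    For what it's worth, a script like the one in the linked post is normally run by Python itself rather than placed inside VLC: save it as something like random_clips.py (the name is a placeholder) anywhere convenient and launch it from a command prompt with python random_clips.py; the script then drives an installed copy of VLC. The underlying trick can also be seen from VLC's own command line, which may demystify what the script does - these are real VLC options, with the file path as a placeholder:

        vlc --start-time=300 --stop-time=600 "C:\videos\some_clip.avi"

    That plays just the 5:00-10:00 window of the file; the script's job is simply to pick random files and random windows and issue the equivalent playback commands in a loop.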

  • Mac Mail sent messages not in sent folder

    - by Elizabeth Akers
    Hi. My PowerBook G4 has been lobbying for a furlough recently, and its latest prank is to not show sent messages in my Sent mailbox. I have always had a copy of every sent message, and now, for some unknown reason, I can only see sent messages from May 7 and earlier. When I did a test and sent myself an email, it showed up fine in my inbox, but there's nothing in the Sent folder. Weird. It also has the little arrow next to messages that I replied to in the inbox. I guess I'm going to have to cc myself on everything until I get this fixed. Also, it's got another new trick, which I'm assuming is unrelated, but I mention it here just in case: the battery says it's "good" and has a cycle count of 0 and a full charge capacity of 2139, but for some reason it's not charging when it's plugged in. It works fine when connected to the wall, but there is no battery life at all. Zip. Any thoughts? Is it just time for a new 'book? This one just got back from Apple after having its hard drive replaced for the fourth time (for free, but seriously, this thing is a lemon). Thanks so much, Elizabeth

  • Is it possible to mount a disk image, created with dd, to a directory on a mounted external usb hdd?

    - by Keeper Hood
    I have an image of my home (/dev/sda3) partition, which I created using the dd command: dd if=/dev/sda3 of=/path/to/disk.img. I deleted the home partition via GParted in order to enlarge my root partition, then recreated /dev/sda3 smaller than the partition I had backed up to the image. Since I have a 2TB external HDD, would it be possible to mount my backed-up image from the external HDD and then copy the files into the /home directory? Since the external HDD would already be in a mounted state, I'm unsure whether mounting an image that lives on a mounted device is a good idea. I'm running Slackware 13.37 (64-bit), using ext4 on all the partitions; I resized the root partition with a GParted live CD. I tried: mount -t ext4 /path/to/disk.img /mnt/image -o loop. It gave me an fs error (wrong fs type, bad option, bad superblock on /dev/loop0). Then I did dmesg | tail, which outputs: EXT4-fs (loop0): bad geometry: block count 29009610 exceeds size of device (1679229 blocks). I have no idea what to do; I want to restore my /home data from the image I backed up.
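
    Loop-mounting an image that lives on another mounted filesystem is perfectly normal, so that part is fine. The dmesg line is the real problem: the filesystem inside the image expects 29009610 blocks (at 4K each, roughly 111GiB), but the file only provides 1679229 blocks (roughly 6.4GiB), i.e. the copy appears truncated. A quick sanity check, hedged on paths:

        ls -l /path/to/disk.img                              # actual file size in bytes
        dumpe2fs -h /path/to/disk.img | grep 'Block count'   # size the ext4 header claims

    If the sizes disagree as the kernel says, the image will need to be re-created from a surviving source; common causes of truncation are a FAT32-formatted destination (4GB per-file cap) or a dd run that was interrupted partway.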

  • Virtualmin & git integration

    - by weby3456
    I've installed Virtualmin on my VPS to manage my websites, and it has been working as expected for nearly a year now. Recently I wanted to add some features to one of my sites, and I need git integration. I've correctly installed git & gitweb on my server, and I can create repositories and view them under http://sub.domain.com/git/gitweb.cgi. Here is the current relevant directory tree:

        /home/user/domains/sub.domain.com/public_html/git/
        drwxr-sr-x user   user .
        drwxr-x--- user   user ..
        -rw-r--r-- user   user git-favicon.png
        -rw-r--r-- user   user git-logo.png
        -rwxr-xr-x user   user gitweb.cgi
        -rw-r--r-- user   user gitweb.css
        drwxrwx--- apache user reponame.git

        /home/user/domains/sub.domain.com/public_html/git/reponame.git/
        drwxrwx--- apache user .
        drwxr-sr-x user   user ..
        drwxrwx--- apache user branches
        -rwxrwx--- apache user config
        -rwxrwx--- user   user description
        -rwxrwx--- apache user HEAD
        drwxrwx--- apache user hooks
        drwxrwx--- apache user info
        drwxrwx--- apache user objects
        drwxrwx--- apache user refs

    But I have some questions: 1. When I visit http://sub.domain.com/git/gitweb.cgi, the owner is listed as 'Apache'. Why? How can I change that? 2. Usually, to create a new git repository, I'll do something like:

        $ mkdir proj
        $ cd proj
        $ git init
        Initialized empty Git repository in /home/user/proj/.git/
        # here I create the files or copy them from somewhere else
        $ git add *.php
        $ git add README
        $ git commit -m 'initial version'

    But after creating the repository in Virtualmin, I find a new dir named 'reponame.git' but no '.git' dir, and when I try to run any git command (e.g. git status) I get "fatal: This operation must be run in a work tree". How can I work with that repository? 3. Currently I need to explicitly grant users access to be able to view the repositories via gitweb. How can I make certain repositories public?
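
    On question 2, the directory layout gives the answer away: Virtualmin created a bare repository (HEAD, objects, refs at the top level, no working tree), and bare repositories reject work-tree commands like git status by design. The usual workflow, sketched with these paths, is to clone it, work in the clone, and push back:

        git clone /home/user/domains/sub.domain.com/public_html/git/reponame.git ~/proj
        cd ~/proj
        # ...add files, then:
        git add . && git commit -m 'initial version'
        git push origin master

    For question 1, gitweb reads the displayed owner from the repository's config, falling back to the filesystem owner (here: apache), so setting the gitweb.owner key should fix the listing (the name is a placeholder):

        git config --file /home/user/domains/sub.domain.com/public_html/git/reponame.git/config gitweb.owner "Your Name"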
