Search Results

Search found 6355 results on 255 pages for 'slow downs'.


  • How could application installations/configurations be easier in Linux?

    - by ajsie
    Although you can do anything in Linux, it tends to require a lot of tweaking in config files and reading a lot of manuals/tutorials before you can have it running your way. I know that it gets a lot easier with time, and the apt-get installations in Ubuntu/Debian are heading in the right direction. But how can Linux be more user-friendly for us in the future? I thought that more could be automated, like in an IDE environment: e.g. typing svn would show all the subcommands, with a description of each as you move between them with the keyboard. That would be great, but it's just one example. Another is navigating between folders in the terminal; right now you have to type a lot just to jump from one folder to another, and some more automation would be welcome here too. I know these extra features will slow down the server, but it's 2010 now, and they are not that heavy for the CPU, while they make a server more user-friendly and encourage its maintenance rather than frightening you off. What do you think? Should or could we have a more user-friendly Linux environment on servers? Is there something that has annoyed you a lot? A lot of things are done in the Unix way, but maybe we should reinvent the wheel in some areas, because apparently things are so repetitive today, and it is difficult to do easy tasks. It should be easier, I think.


  • Cut (smart edit) .mts (AVCHD Progressive) files on Ubuntu Lucid

    - by pts
    I have a bunch of .mts files containing AVCHD Progressive video recorded by a Panasonic camera, and I need software on Ubuntu Lucid with which I can remove the boring parts and concatenate the interesting parts, all without re-encoding the video stream. It's OK for me to cut at keyframe boundaries. If Avidemux were able to open the files, it would take about 60 hours of work for me to cut them. (At least that's what it took last time I tried with similar videos, but in a file format supported by Avidemux.) So I need a fast, powerful and stable video editor, because I don't want those 60 hours of work to go up to 240 or even 480 hours just because the tool is too slow or unstable or has a terrible UI. I've tried Avidemux 2.5.5 and 2.5.6, but they crash trying to open such a file, even if I convert the file to .avi first using mencoder -oac copy -ovc copy. mplayer can play the files. I've tried Avidemux 2.6.0, which can open the file, but it cannot jump to the previous or next keyframe, etc. (if I make it jump to the next keyframe and then to the previous keyframe, it doesn't end up at the original keyframe, and sometimes it displays an error). Also I'm not sure if Avidemux 2.6.x would let me save the result without re-encoding. I've tried Kdenlive 0.7.7.1, but playback is very choppy, and it cannot play audio at all (complaining that SDL cannot find the device, although many other programs on the system can play audio); it would be a pain to work with. I've tried converting the .mts file to .mkv using ffmpeg -i input.mts -vcodec copy -sameq -acodec copy -f matroska output.mkv, but that caused too many visible distortions in the video in both mplayer and Avidemux. I've tried converting the .mts file with TsRemux.exe, but Avidemux 2.5.x still can't open that file. Is there another program to cut and concatenate the files? Is there a preprocessor which would create a file (without re-encoding the video) on which Avidemux wouldn't crash?
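
    A hedged sketch of the stream-copy route with a reasonably recent ffmpeg; the filenames and timestamps are placeholders, and cuts snap to keyframes, which matches the requirement above:

      # cut two interesting parts without re-encoding (keyframe-accurate only)
      ffmpeg -ss 00:01:00 -i input.mts -t 00:01:30 -c copy part1.ts
      ffmpeg -ss 00:05:00 -i input.mts -t 00:02:00 -c copy part2.ts
      # list the kept parts, then concatenate them, again without re-encoding
      printf "file 'part1.ts'\nfile 'part2.ts'\n" > parts.txt
      ffmpeg -f concat -i parts.txt -c copy joined.mts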


  • VMware NAS/iSCSI recommendations - smallish organization

    - by Bubnoff
    I have two VMware servers, ESX + ESXi, and two backup NAS boxes. The current NAS boxes are low-cost and unsuitable for running VMs from: they support NFS only, and they are slow. My plan is to have a dedicated iSCSI/NAS box for storing and running VMs, plus two additional low-cost boxes for backup. I'm looking for advice on two things really: 1. Recommendations on VMware architecture/design for a smaller organization: fewer than 20 virtual machines, 2 servers + 2 x 1.5 TB backup NAS boxes. 2. A good NAS/iSCSI box, with your recommendation on RAID config; I would go with RAID 6 or better. I'm trying to design an installation that is both fast and reliable/redundant. If you have any experiences to share, or your current configuration including network design (switches, fiber, etc.), I will be enormously thankful. I'm not married to this idea, so if you have a design not using iSCSI NAS boxes, let 'er rip. Cost? Can we stay around $5,000 (on top of the already stated components)? Links to info are welcome too. Thanks for reading! Bubnoff


  • Access denied to external USB disk; update access rights fails in Windows 8

    - by gerard
    I used to work with two laptops (Windows Vista and Windows 7), my work files being on an external USB disk. My oldest laptop broke down, so I bought a new one; I had no option other than to take Windows 8. I suspect something changed with access rights, as my external disk started giving "access denied" errors on Windows. What happened, in order: 1. I was prompted by Windows 8 to fix the access rights, which I tried to do through Properties - Security. This process was very slow and ended up saying "disk is not ready". Additionally, my external USB disk was somehow not recognized anymore. 2. Back on Windows 7, I was warned that my disk needed to be verified, which I did. In this process some files were lost (most of them I could recover from the found.00x folders, and I have a backup anyway). Also, I don't know why, but under Windows 7 all the folders showed with a lock. 3. Then back again to Windows 8: same problem, access denied to my disk, and no way to change access rights as it gets stuck on "disk is not ready". Now I am pretty sure there is some kind of bug or inconsistency between Windows 8 and Windows 7. I did steps 2 and 3 a few times. At some point I also got an access denied error in Windows 7. I could restore access rights on the disk to "System" (Properties - Security - Edit, for full control to the group "system"), but then I still get the same access-rights problem on Windows 8, and I get stuck in the process of restoring full control to the "system" and "admin" groups. I upgraded Windows 8 with the available Windows 8 updates. It does not help.
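
    A hedged sketch of the same rights repair from an elevated command prompt, in case the Properties - Security dialog keeps failing; X: is a hypothetical drive letter for the external disk:

      rem take ownership of the whole tree (/D Y answers the prompts with Yes)
      takeown /F X:\ /R /D Y
      rem re-grant full control to SYSTEM and Administrators, recursively
      icacls X:\ /grant "SYSTEM:(OI)(CI)F" "Administrators:(OI)(CI)F" /T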


  • Windows Server 2008 can't start postgresql-x64-9.0 service: could not create any TCP/IP sockets

    - by Rob
    After rebooting a Windows Server 2008 machine to apply system updates, we began having issues running PostgreSQL 9.0. When we noticed the problem we reverted the Windows updates, but the issue persists. From services.msc, attempting to start the postgresql-x64-9.0 service fails: half-way through starting, the progress bar becomes very slow, and it eventually responds with error 1053, "the service did not respond in a timely fashion." Interestingly enough, bringing up Task Manager shows that multiple instances of postgres.exe have been started, and the log file shows: 2011-02-10 14:44:02 EST LOG: database system is ready to accept connections. I then tried killing the processes and starting via the command line (as the user postgres), but I receive a different error: C:/Program Files/PostgreSQL/9.0/bin/pg_ctl.exe start -N "postgresql-x64-9.0" -D "F:/SHARE/postgres" -w waiting for server to start... pg_ctl: could not start server EST WARNING: could not create listen socket for "192.168.0.101" EST FATAL: could not create any TCP/IP sockets. The log file again indicates that the database is ready to accept connections. Also, netstat indicates that no other process is using port 5432; I can't think of any other obvious reason that opening the listen socket might fail. Any help would be greatly appreciated.
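
    One plausible cause (an assumption, not confirmed by the logs above): listen_addresses in postgresql.conf names a specific IP, and if 192.168.0.101 stopped being assigned to any interface after the reboot, binding it fails with exactly this error. A hedged sketch of the relevant lines in F:/SHARE/postgres/postgresql.conf:

      # hypothetical current value that would produce the error above
      listen_addresses = '192.168.0.101'
      # either restore that IP on an interface, or listen on all of them:
      listen_addresses = '*'
      port = 5432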


  • Rate limiting bandwidth per IP

    - by Yohan
    First, I am not that good with computers; I even had problems with Windows PCs. Right now I own a restaurant which happens to offer free internet. My ISP set up my connection using an Ubuntu 11.10 box. The IP address is 192.168.1.16 with netmask 255.255.0.0; DNS is 192.168.1.1 and the gateway is 192.168.1.1. My problem is that my customers complain all day about the slow network. When I receive that kind of complaint, the first thing that comes to mind is to scout my area, find out who the culprit is, and ask him not to waste our bandwidth. Now it is getting boring scouting people around, and I need my Linux box to limit bandwidth instead. I don't care if their provider can't be faster, but I want to limit each person to 70 kbit/s. Most annoying are the people who use FlashGet and torrents; usually they consume the most bandwidth. My question: how can I limit that? Please guide me in an easy way; I've spent a few days reading the tc documentation but don't understand a thing. Basically, I want all my customers to get 70 kbit/s each, no matter what.
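
    A minimal tc sketch, assuming eth0 faces the customer LAN and 192.168.1.0/24 is the client subnet (both assumptions); it caps one client at 70 kbit/s, and you would repeat or script the class/filter pair per client address:

      # root qdisc with a hierarchical token bucket
      tc qdisc add dev eth0 root handle 1: htb
      # one 70 kbit/s class per client
      tc class add dev eth0 parent 1: classid 1:10 htb rate 70kbit ceil 70kbit
      # send traffic destined to this client into that class
      tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
          match ip dst 192.168.1.100/32 flowid 1:10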


  • Getting rid of your server in a small business environment

    - by andygeers
    In a small business environment, is it still necessary to have a central server? Speaking for my own company (a small charity with about 12 employees), we use our server (Windows Server 2003) for the following: email via Microsoft Exchange; central storage; acting as a print server; and user authentication / Active Directory. There are significant costs associated with running a server like this: electricity, first for the server itself and then for the air conditioning required (this thing pumps out a lot of heat); noise (of which there is a lot); and IT support bills (both Windows Server and Exchange are pretty complicated, and there are many ways they can go wrong). I've found ways to replace many of these functions with cheaper (better?) alternatives. Google Apps / Gmail is a clear win for us: we have so many spam-related problems it's not even funny, and Outlook is dog-slow on our aging computers. You can buy networked storage devices with built-in print servers, such as the Netgear ReadyNAS RND4210, that would allow us to store/share all of our documents and access printers over the network. The only thing that I can't figure out how to do away with is the authentication side of things. It seems to me that if we got rid of our server, we'd essentially have a bunch of independent PCs with no shared pool of user accounts and no central administrator. Is that right? Does that matter? Am I missing any other good reasons to keep a central server? Does anybody know of any good, cost-effective ways of achieving the same end without the expensive central server?


  • Terminal Server in Windows Server 2003

    - by Hemal
    I am confused about what I am doing here. At present I have a Windows Server 2003 server with SP2. I have assigned it the RAS/VPN server role (through the Manage Your Server wizard), and in my router I set up the IP address of my RAS/VPN server as the PPTP server. Staff leave their workstations on all the time and access them from home through RDP: they first connect through the VPN, and in the Remote Desktop client they simply type their respective IP or computer name to access the office network from home. Everything works fine so far, except that staff have to leave their computers always on in the office, and speed is very slow depending on how many staff members are using the VPN. I was told to install and configure Terminal Services to improve this situation. I have already added the Terminal Server role on the server, but I don't know how clients can access the TS server from home or a remote location. I would really appreciate any good links or guidance from the experts in this group. Thank you in advance for any replies!


  • Configuring lighttpd for large downloads

    - by ahmedre
    I run a web site that hosts pages that are just general scripts (PHP, etc.) and MP3 downloads (some of which are fairly large, up to 200 MB). I am running lighttpd on the servers on Linux (Ubuntu 64-bit). Everything is fine, but under high load the server is not accessible (or very slow; even SSHing in takes a while), and I am guessing this is due to a huge number of MP3 downloads at that time. Consequently, DNS sees the server as down and redirects all the traffic to the other servers, and after a while it comes back up and things work again. So what's the best way to fix this? Ideally, I want the server to keep running (and the web pages, PHP etc., to always work, though downloads don't always have to work). Should I just have two web servers running (one for the downloads and one for the PHP pages), or is it perhaps something I can fix in my lighttpd configuration? Here are the snippets from my configuration:

      server.max-worker = 4
      server.max-fds = 2048
      server.max-keep-alive-requests = 4
      server.max-keep-alive-idle = 4
      server.stat-cache-engine = "fam"

      fastcgi.server = ( ".php" => ((
          "bin-path" => "/usr/bin/php-cgi",
          "socket" => "/tmp/php.socket",
          "max-procs" => 1,
          "idle-timeout" => 20,
          "bin-environment" => (
              "PHP_FCGI_CHILDREN" => "64",
              "PHP_FCGI_MAX_REQUESTS" => "1000"
          ),
          "bin-copy-environment" => ( "PATH", "SHELL", "USER" ),
          "broken-scriptfilename" => "enable"
      )) )

      # normal php site
      $HTTP["host"] =~ "bar.com" {
          server.document-root = "/usr/local/www/sites/bar.com/"
          accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/bar.log"
      }

      # download site
      $HTTP["host"] =~ "(download|stream).foo.com" {
          server.document-root = "/home/audio/"
          dir-listing.activate = "enable"
          dir-listing.hide-dotfiles = "enable"
          evasive.max-conns-per-ip = 1
          evasive.silent = "enable"
          # connection.kbytes-per-second = 256
          accesslog.filename = "|/usr/sbin/cronolog /var/log/lighttpd/%m/%d/%H/download.log"
      }
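
    One hedged tweak worth trying before splitting servers: lighttpd can cap throughput per connection and for a whole virtual host, so bulk MP3 traffic can't starve the PHP site. The numbers here are guesses to be tuned:

      $HTTP["host"] =~ "(download|stream).foo.com" {
          # total for this vhost, in KB/s
          server.kbytes-per-second = 8192
          # and per connection, in KB/s
          connection.kbytes-per-second = 256
      }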


  • Network access lags for Win7 when server network utilization is high

    - by Jeff Miles
    We have a Dell PE2950 file server running Windows 2008, hosting a DFS namespace of ~1.2 TB. This server has two Broadcom 1 Gbps NICs teamed together. When there is high traffic going to the server across the network (greater than 200 Mbps), any Windows 7 client accessing a DFS share at the time experiences severe performance problems. For example: Computer A has an AutoCAD drawing open directly from the DFS share; performance is normal, not causing any issues. Computer B begins a file transfer, putting an 11 GB file onto a different DFS namespace on the same server. Computer A immediately notices lag while using AutoCAD: the cursor momentarily freezes within AutoCAD every 10 seconds or so, and any browsing of the DFS share is extremely slow. Computer B completes the file transfer, and performance returns to normal for Computer A. This only affects Windows 7 clients, on a variety of hardware (desktop + laptop); all of our Windows XP clients see no performance impact during the file transfer. Things I have tried with no change: had Computer A work from an entirely different RAID array from the file transfer destination; updated NIC drivers on clients and server; enabled TCP offload and receive-side scaling on the server NIC (previously disabled when the issue began); disabled antivirus during the file transfer. I am currently having a user test applications other than AutoCAD while the file transfer occurs, and will update the question with the result. Does anyone have any recommendations for resolution or additional troubleshooting steps?
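
    One hedged thing to test on an affected Windows 7 client (an assumption, not a confirmed fix; TCP receive-window auto-tuning interacting badly with busy SMB servers is a commonly reported culprit for this symptom):

      rem disable receive-window auto-tuning, then retest during a transfer
      netsh interface tcp set global autotuninglevel=disabled
      rem revert afterwards if it makes no difference
      netsh interface tcp set global autotuninglevel=normal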


  • h264 inside FLV container vs. MP4 container?

    - by Gotys
    I am developing a tube site and currently having issues with the h264 format. Looking at YouTube, I noticed they put their hi-def videos into an MP4 container, so logically I did the same. Next, I installed mod_h264_streaming for lighttpd to make streaming and timeline scrubbing work. The problem is that large files (500 MB+ at somewhat high resolution) take forever to even start buffering (I read that Flowplayer and other Flash players need to download the metadata first). I moved the moov atom to the front of the file with MP4Box (I tried qt-faststart too), and the problem didn't go away. Next I read online that I need to interleave the audio tracks, so I did that too. No change in slowness. So I tried putting the same exact h264 movie into an FLV container, and playback buffering starts almost instantly; no slowness. So what am I missing here? Why would I choose an MP4 container with the mod_h264_streaming module, which seems super-slow, over a regular FLV container with lighttpd's built-in mod_flv_streaming? Obviously many websites pick the MP4 container, but I fail to understand why. And as a side question: I tried using HTML5's VIDEO tag with the same h264 MP4 movie, and the scrubbing is LIGHTNING FAST! I looked into lighttpd's log file, and I noticed that Flash players append video.mp4?start=234 each time the timeline is scrubbed, whereas HTML5's video tag does no such thing. Is this some sort of limitation of Flash? Why can't Flash streaming be as fast as HTML5 streaming? Thanks to all who can help; I very much appreciate this community.
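
    For reference, a hedged sketch of the moov-relocation and interleaving steps mentioned above (qt-faststart ships with ffmpeg's tools; the 500 ms interleave window is an arbitrary choice):

      # rewrite with the moov atom up front so playback can start immediately
      qt-faststart input.mp4 output.mp4
      # or let MP4Box interleave audio/video in 500 ms chunks in the same pass
      MP4Box -inter 500 input.mp4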


  • How to Set up MySQL Server to utilize more memory

    - by Cyril Gupta
    Hi there, I have MySQL set up on Windows along with Plesk. The version is 5.0.45 Community. The databases I have on the server are MyISAM as well as InnoDB, but predominantly InnoDB. I have 8 GB of memory on my server, but MySQL isn't using more than 1.3 GB, and tweaking the settings isn't helping. I tried to increase the memory allocation for innodb_buffer_pool_size: it works if I set it to 1G, but if I set 2G or above, the server doesn't come back online! I want MySQL to use at least 5-6 GB of the memory I have, for performance, but I can't get this to work. Can anyone please help? My MySQL config file is below (there are two mysqld sections; when I used MySQL Workbench it created another one!)

      [MySQLD]
      port=3306
      basedir=C:\\Program Files (x86)\\Parallels\\Plesk\\Databases\\MySQL
      datadir=C:\\Program Files (x86)\\Parallels\\Plesk\\Databases\\MySQL\\Data
      default-character-set=latin1
      default-storage-engine=INNODB
      query_cache_size=128M
      table_cache=1024
      tmp_table_size=32M
      thread_cache=32
      myisam_max_sort_file_size=100G
      myisam_max_extra_sort_file_size=100G
      myisam_sort_buffer_size=2M
      key_buffer_size=32M
      read_buffer_size=16M
      read_rnd_buffer_size=2M
      sort_buffer_size=8M
      innodb_additional_mem_pool_size=24M
      innodb_flush_log_at_trx_commit=1
      innodb_log_buffer_size=10M
      innodb_buffer_pool_size=1G
      innodb_log_file_size=10M
      innodb_thread_concurrency=8
      max_connections=700
      key_buffer=48M
      max_allowed_packet=5M
      sort_buffer=2M
      net_buffer_length=4K
      old_passwords=1
      wait_timeout=20
      connect_timeout=60

      [client]
      port=3306

      [mysqld]
      query_cache_min_res_unit = 4096
      innodb_additional_mem_pool_size = 1048576
      innodb_buffer_pool_size = 1G
      query_cache_limit = 1048576
      key_buffer_size = 8388608
      sort_buffer_size = 2097144
      query_cache_type = 1
      query_cache_size = 312M
      log-slow-queries
      connect_timeout = 5
      wait_timeout = 20
      thread_cache_size = 15
      read_buffer_size = 131072
      table_cache = 64


  • Can MySQL use multiple data directories on different physical storage devices

    - by sirlark
    I am running MySQL with its data dir on a 128 GB SSD. I am dealing with large datasets (~20 GB) that are loaded and processed weekly, each stored in a separate database for the purposes of time-point comparisons. Putting all the data into a single database is unfeasible because performance on such large databases is already a problem. However, I cannot keep more than 6 datasets on the SSD at a time. Right now I am manually dumping the oldest to a much larger 2 TB spinning disk every week, and dropping the database to make space for the new one. But if I need one of the 'archived' databases (a semi-regular occurrence), I have to drop a current one (after dumping it), reload the archived one, do what I need to, then reverse the process. Is there a way to configure MySQL to use multiple data directories, say one on the SSD and one on the 2 TB spinning disk, and 'merge' them transparently? If I could do this, then archiving would no longer mean "moved out of the database entirely" but instead "moved onto the slow physical device". The time taken to run my queries on a spinning disk would be less than that taken to completely dump, drop, load, drop and reload two entire databases, so this is a win. I thought of using something like unionfs, but I can't think of a way to control which database gets stored on which physical drive, because it works by merging at the directory level (from what I understand), so I'm still stuck with multiple directories. Any help appreciated; thanks in advance.
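
    A hedged sketch of one workaround: each database lives in its own directory under the datadir, so an individual database can be moved to the spinning disk and symlinked back (archive2011 is a hypothetical database name; stop mysqld first, and note that for InnoDB this only works cleanly with innodb_file_per_table enabled):

      sudo service mysql stop
      sudo mv /var/lib/mysql/archive2011 /mnt/spinning/mysql/archive2011
      sudo ln -s /mnt/spinning/mysql/archive2011 /var/lib/mysql/archive2011
      sudo chown -R mysql:mysql /mnt/spinning/mysql/archive2011
      sudo service mysql start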


  • Edubuntu video playback and apt-get

    - by asdasd
    They installed some modified Edubuntu machines at school, so I have some questions about setting some things up: 1. How can we play HD videos? They were made for Windows machines and are in .wmv format, but we need to play them in our multimedia class and don't know how: which player, which codecs, etc. 2. How do I properly edit the /etc/apt/sources.list file? Anything we try to install via apt-get just says that E:\ is not available. Please tell me which repositories to put in there so we can install some tools. 3. Where are viruses/trojans usually hidden in Ubuntu? I mean, in which directories? Our computers are behaving really slowly and we need to check for malware manually; we are not even allowed to install any kind of AV software. So tell me the usual directories and places for hiding such files, how they are hidden, how to recognize them, etc. 4. Any other nice tricks/tips that we need to know. Thank you very much in advance.
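
    For question 2, a hedged sketch of a standard /etc/apt/sources.list (replace <codename> with the output of lsb_release -sc on your machines; the error suggests the current file points at install media rather than network repositories, which is an assumption):

      # comment out any "deb cdrom:..." lines, then add network mirrors:
      deb http://archive.ubuntu.com/ubuntu <codename> main restricted universe multiverse
      deb http://archive.ubuntu.com/ubuntu <codename>-updates main restricted universe multiverse
      deb http://security.ubuntu.com/ubuntu <codename>-security main restricted universe multiverse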


  • Roaming Profiles & Redirected Folders - storage consumption? offline files and caching?

    - by Ben Swinburne
    I understand the concepts of both roaming profiles and folder redirection and have used both separately before. I am about to set up a network from scratch and would ideally like to use both, primarily for the following reasons: roaming profiles allow users to log on to any machine and have their profile; folder redirection allows users to have their My Documents, Desktop, etc. backed up without the need to log off at the end of the day (the servers can run their backups overnight, and there are no missing files due to the user not logging off); and redirected folders largely alleviate the slow login times caused by large profiles. My question is: if some of the folders are redirected, and therefore not part of the roaming profile, what happens on machines which truly roam (i.e. laptops)? If there are offline files or a cache, does this mean that the problem whereby a user has to log off comes back? And with both enabled, is there any duplication? I.e., if I have a users$ share and a profiles$ share, would I have Desktop twice, for example?


  • Mongo daemon restarts after replica set issue

    - by Matt Beckman
    We had a recent election in our replica set (2 read nodes; 1 write node) that changed the primary node. Curious as to why this occurred, I started looking through the logs to find out what happened. It appears that mongoNode2 could not communicate with mongoNode3. When both nodes could not communicate, it appears that this caused the services on mongoNode2 and mongoNode3 to restart, eventually resulting in a new primary after the services had been started again.

      Thu Jun 23 08:27:28 [ReplSetHealthPollTask] DBClientCursor::init call() failed
      Thu Jun 23 08:27:28 [ReplSetHealthPollTask] replSet info mongoNode3:27017 is down (or slow to respond): DBClientBase::findOne: transport error: mongoNode3:27017 query: { replSetHeartbeat: "myReplSet", v: 3, pv: 1, checkEmpty: false, from: "mongoNode2:27017" }
      Thu Jun 23 08:27:29 got kill or ctrl c or hup signal 15 (Terminated), will terminate after current cmd ends
      Thu Jun 23 08:27:29 [interruptThread] now exiting
      Thu Jun 23 08:27:29 dbexit:

    Is there any reason that the mongo service would restart due to a DBClientCursor::init call() failure? Is this a known bug? It should be noted that mongoNode2 and mongoNode3 are VMs on the same VMware host. MongoNode1 is not on the same host, and it did not have any issues with the service. However, I did not have any other reports of issues with other VMs on the VMware host.


  • This .mpg video clip doesn't play well

    - by Roey
    I've installed the K-Lite Mega Codec Pack v6.9.0 with playback essentials, without a player. My default and only media player is Windows Media Player. Here is the clip's media info:

      General
        Complete name : D:\Users\Roey\Downloads\B384MV.mpg
        Format : MPEG-PS
        File size : 273 MiB
        Duration : 4mn 59s
        Overall bit rate : 7 643 Kbps
      Video
        ID : 224 (0xE0)
        Format : MPEG Video
        Format version : Version 2
        Format profile : Main@High
        Format settings, BVOP : No
        Format settings, Matrix : Default
        Format settings, GOP : M=1, N=15
        Duration : 4mn 57s
        Bit rate mode : Variable
        Bit rate : 7 363 Kbps
        Nominal bit rate : 9 000 Kbps
        Width : 1 920 pixels
        Height : 1 080 pixels
        Display aspect ratio : 16:9
        Frame rate : 25.000 fps
        Color space : YUV
        Chroma subsampling : 4:2:0
        Bit depth : 8 bits
        Scan type : Progressive
        Compression mode : Lossy
        Bits/(Pixel*Frame) : 0.142
        Stream size : 261 MiB (96%)
      Audio
        ID : 192 (0xC0)
        Format : MPEG Audio
        Format version : Version 1
        Format profile : Layer 3
        Mode : Joint stereo
        Duration : 4mn 59s
        Bit rate mode : Constant
        Bit rate : 128 Kbps
        Channel(s) : 2 channels
        Sampling rate : 44.1 KHz
        Compression mode : Lossy
        Stream size : 4.56 MiB (2%)
      Menu

    When I play it there is no sound (just a little "kahhhh" noise every 10-20 seconds) and the frames move very slowly; it "jumps" frames. A blue tray icon, [FFa] "ffdshow audio decoder", pops up with the following details: Input: MP3, stereo, 44100 Hz (libavcodec); Output: PCM, stereo, 44100 Hz, 16-bit integer. Any help will be much appreciated. Thanks


  • Orphaned SQL Recordsets/Connections with IIS

    - by Damian
    I have an IIS 6 site running on Windows Server 2003 x86 with MS SQL 2005 Enterprise Edition, running Classic ASP (no choice). The site runs very fast, with about 8,000 page views per hour. All of my SQL tables are indexed, and I have used the profiler to check my queries, the slowest of which takes only about 10-15 ms. I have autoshrink disabled, autogrow is set to 250 MB, and the database is 2 GB with 800 MB of free space. My problem is that every now and then the site will slow to a crawl for no apparent reason. Pages that just connect to the database and increment a hit counter work OK, but more SQL-intensive pages that normally execute in about 60 ms take 25,000 ms to run. This happens for about 30 seconds and then goes away. I was having an issue with orphan recordsets and connections due to the way I was releasing them. I have fixed this up and the issue is much better, but I am still getting them. Is there a way, with perfmon etc., to track when SQL Server or Windows closes these orphan connections? At least if I can monitor the issue I will know whether I am making progress, or whether I am even looking at the right things. Is there anything else I might be missing? Thank you!


  • Spreadsheet application that can handle big data on OS X

    - by Peter
    I've been working with Excel for quite a while for some statistical analysis that I do regularly. The size of the data that I'm working with has gotten much larger of late, however. The layout of the sheets in question is quite simple: usually just three columns, holding a UNIX timestamp, an EST value derived from it, and a proprietary numeric value, plus a computed column holding the average over the rows whose timestamp is within +/-1000 of that row's timestamp (a little AVERAGEIFS() formula). That formula and the EST conversion are the only formulas in the sheet. I'm beginning to work with files with 500,000+ rows, and running the average formula down the entire column takes forever. The end result is the production of print-worthy graphs. I'm looking for either a UNIX command-line utility or a separate spreadsheet/database application that can handle this amount of data without melting my CPU or making me wait an hour. Is there anything out there? TL;DR: A simple Excel sheet with over half a million rows is getting too slow to work with. OS X alternatives?
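
    As a sketch of the UNIX command-line route: assuming the sheet is exported as a timestamp-sorted, comma-separated file with the timestamp in column 1 and the numeric value in column 2 (a simplification of the layout above), a sliding-window awk pass computes the +/-1000 average in one linear scan instead of 500,000 AVERAGEIFS() evaluations:

      awk -F, '
          { t[NR] = $1; v[NR] = $2 }
          END {
              lo = 1; hi = 1; sum = 0
              for (i = 1; i <= NR; i++) {
                  # grow the window on the right: rows with t <= t[i] + 1000
                  while (hi <= NR && t[hi] <= t[i] + 1000) { sum += v[hi]; hi++ }
                  # shrink it on the left: drop rows with t < t[i] - 1000
                  while (t[lo] < t[i] - 1000) { sum -= v[lo]; lo++ }
                  printf "%s,%s,%.4f\n", t[i], v[i], sum / (hi - lo)
              }
          }' data.csv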


  • Creating a bootable USB drive from a distro split over two DVD ISOs

    - by Kev
    I am searching and not finding the right way to do this. Please note, I don't think I'm trying for anything strange here: I just want to make a bootable USB stick of a single OS that happens to be larger than one DVD, and larger than FAT32 will allow for in a single file. On our slow connection I spent a long time downloading CentOS 5.9's two DVD ISOs: CentOS-5.9-x86_64-bin-DVD-1of2.iso (4.4 GB) and CentOS-5.9-x86_64-bin-DVD-2of2.iso (718 MB). I have a USB stick that I want to somehow get these two ISOs onto. Since the first one is 4.4 GB, I can't use ISO2USB, because it insists on FAT32. I cannot find an alternative that lets you specify more than one ISO image of the same distro (I'm not trying for some fancy multi-boot thing) to put on the same stick. I guess I should have downloaded the CD ISOs, but I thought I was "saving time" because then I wouldn't have as many files to run through the MD5 checker. There's no IMG file of the whole thing (only a net-install version, which I don't want; I want to pre-download everything), otherwise I would've gone for that. So, given that I have these two DVD ISOs, how can I get them onto a stick that will boot and make proper use of both of them to install CentOS somewhere? Again, I don't think this is anything out of the ordinary, yet I can't find software/docs that seem to support this. Am I stuck re-downloading everything in CD-sized ISOs just to do this? I found this, but it doesn't run on Windows, and I am using Windows to prepare the stick.


  • How to reinstall bootloader after migration to SSD

    - by hijarian
    I must say, it was difficult to name this question. Basically, I need to properly reinstall the bootloader on my system, because I already have working system disks for my OSes. The long story: I had a large, slow HDD with a Windows 7 and Debian Wheezy dual-boot on it, perfectly bootable. Then I ordered an SSD and prepared my system partitions to fit onto the much smaller drive. I wanted the following scheme: 128 GB Windows, 24 GB Debian /, 86 GB Debian /home (a strange size for /home, but there's no such thing as a true 256 GB disk drive). So I prepared such partitions on the original HDD, installed the new SSD, loaded a GParted live USB (I can't remember now what it was really called), and just copy-pasted the partitions from the HDD to the SSD. So now I have the following partitions across the physical disks. SSD: a 128 GB copy of the original Windows partition, a 24 GB copy of (presumably) the Debian /, and an 86 GB copy of (presumably) the Debian /home. HDD: 128 GB Windows, 24 GB Debian /, 86 GB Debian /home, and several other partitions with non-system data. The behavior of the system right after the Ctrl+C, Ctrl+V in GParted was as follows: no GRUB; the system boots right into the Windows on the HDD. The BIOS is set to boot from the SSD first. I managed to create a Debian Testing installation USB and loaded it in rescue mode, found that it identified my SSD as /dev/sda, and installed GRUB to /dev/sda. Now my system loads a GRUB menu which lists both Windows and Debian, but from the HDD. So I am back in the initial position. Please, how should I set up GRUB so it will load the OSes correctly from the SSD? Should I fire up my Debian, fiddle with GRUB's config, and reinstall it again to the same place (on the SSD)?
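
    A hedged recovery sketch, assuming the SSD really is /dev/sda and the Debian root copy is /dev/sda2 (verify with lsblk -f first). One caveat worth knowing: a GParted copy-paste keeps the original filesystem UUIDs, so fstab and GRUB can silently resolve to the HDD's identical partitions while it is still plugged in.

      # from a live/rescue system: mount the SSD's Debian root and chroot in
      mount /dev/sda2 /mnt
      mount --bind /dev /mnt/dev
      mount --bind /proc /mnt/proc
      mount --bind /sys /mnt/sys
      chroot /mnt
      # inside the chroot: reinstall GRUB to the SSD's MBR, regenerate the menu
      grub-install /dev/sda
      update-grub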


  • Why isn't my phone charging with some micro usb cables?

    - by Jacxel
    I ordered 3 micro-USB cables on eBay. My phone was at about 50% battery, and I wanted to use it as a hotspot for some browsing on my laptop, so I plugged it in; a charging icon appeared on the phone, my laptop showed it as a connected USB device, and so I went about my business. About 30 minutes later I checked the phone and to my dismay saw 45% battery. But ah, I thought, I have been putting the poor little thing under too much pressure: acting as a WiFi hotspot must drain the battery quicker than it can charge via USB, and perhaps my laptop's USB port doesn't even output enough power. Undeterred, I continued on, and when I was going to bed I plugged the USB cable into a mains adapter, switched everything battery-consuming off and, content, went to sleep. The next morning I was awoken by my phone's alarm, which got cut off unexpectedly. I attempted to unlock my phone, which showed no more signs of life. Why isn't my phone charging with these new USB cables? For clarity: they transfer data with no problems; the phone appears to be charging, showing all the signs and lights it normally would; and the cable that came with the phone works as you would expect, so it's not a fault with the phone. I think the new cables merely slow the discharging of the phone, but I could be wrong. Are these just bad-quality cables? Is there a way to fix this issue?


  • Hard drive degradation from large memory usage and paging files?

    - by Stephen R
    I've had some questions about computer degradation going through my head for a while, and I haven't found many good resources for researching them. 1) First off, when does Windows use the virtual RAM/paging file on the hard drive? Is it used when the RAM is full, or is the paging file used as an intermediate cache between the RAM and actual hard drive space all the time? 2) If I run many applications at the same time, and have a bad habit of doing this for the entire lifetime of the computer, does it use more of the paging file than if I had fewer programs running? Just to note: the RAM never fills up on my computer, but it is used heavily. 3) By extension of question 2, if the paging file is used more heavily, would that result in more rapid hard drive degradation? I have seen a pattern among all of the computers that I have owned or used in the past 5 years. I am the kind of person to leave my web browser up with 40 tabs, among other programs, which typically eats up 40% of my memory. Over time my computer slows down, browsers start crashing, programs start seizing up or crashing themselves, and eventually the computer becomes essentially unusable. I have been trying to rack my brain for a solution other than purchasing a new PC, only to have it die on me in the next couple of years as well. The only thought that has come to mind that might be a simple hardware fix is... Windows ReadyBoost, maybe? I'd like to be able to discuss this so I can learn something about all of the above. Thanks.


  • Ubuntu in failed state after upgrade from 10.04 to 10.10 - How to recover?

    - by Harvey
    I was running Ubuntu 10.04 and attempted to upgrade to 10.10. I have a really slow connection (DSL, 128 kbit/s) and copying the upgrade files took about 26 hours, so of course I let it run unattended. When I came back, I noticed the following 3 dialogs: 1. "Could not install the upgrades. The upgrade has aborted. Your system could be in an unusable state. A recovery will run now (dpkg --configure -a)." 2. gpk-update-icon: "Distribution upgrades available: maverick 10.10 (stable) [more information] [Do not show this again] [Cancel] [Ok]" 3. gpk-update-icon: "Security updates available. The following important updates are available for your computer: libwebkit-1.0-2-dbg - Web content engine library for Gtk+ - Debugging symbols; libcupsimage2 - Common UNIX Printing System(tm) - Raster image library ..." What is the best response to all of this? I went through something similar in an attempted network upgrade from 8.04 to 10.04 and had to reload the unbootable machine fresh from distribution media (all data was lost). I'd like to avoid that here. I have not yet responded to the dialogs, and I want to make sure the system is still bootable and that I don't lose my data this time.
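
    A hedged first-aid sequence worth trying from a terminal or recovery console before considering a reinstall; these are the standard steps for an interrupted upgrade, and the first one is exactly what the aborted-upgrade dialog says it will run:

      sudo dpkg --configure -a      # finish configuring half-installed packages
      sudo apt-get -f install       # repair broken dependencies
      sudo apt-get dist-upgrade     # resume and complete the upgrade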


