Search Results

Search found 11409 results on 457 pages for 'large teams'.

Page 351/457

  • Need help finding a program to split PDFs based on text in specific areas

    - by Sean
    I really need help with this. I work in the admissions office of a university and we receive large PDFs containing every document an applicant uploads to us. I need to split these PDFs into separate documents. Each separate document has a heading at the top saying what type of document it is (Statement of Purpose, Transcript, etc.). A few weeks ago I found a program that at first seemed to work great (A-PDF Splitter). It would look for every document type in a combined PDF (e.g. Statement of Purpose), and when it saw that heading it would create a new PDF named "ORIGINALFILENAME-Statement of Purpose". I soon discovered, though, that the program breaks for no apparent reason on certain PDFs, and I then have to take that PDF out of the queue and start the splitting over again (and we get 250 a day). I contacted their support and they basically told me I was out of luck. So please, if you can find me a program that splits PDFs into smaller PDFs based on finding particular text in a region, I will be forever in your debt. If I don't find one soon I'm going to be spending entire days splitting PDFs by hand, and my boss isn't going to be happy.
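    One way to approach this without a dedicated product is to script the split. The sketch below assumes poppler-utils (pdfinfo, pdftotext) and qpdf are installed; the heading list and file names are placeholders that would need to match the real document types. It cuts a new file each time a page's text matches one of the headings.

        #!/bin/bash
        # Hypothetical sketch: split a combined application PDF wherever a page
        # carries one of the known headings.
        in="$1"
        base="${in%.pdf}"
        pages=$(pdfinfo "$in" | awk '/^Pages:/ {print $2}')
        headings="Statement of Purpose|Transcript|Letter of Recommendation"
        start=1
        label="part"
        for ((p = 1; p <= pages; p++)); do
          # Extract the text of page p and look for one of the headings.
          found=$(pdftotext -f "$p" -l "$p" "$in" - | grep -E -o -m1 "$headings")
          if [[ -n "$found" && "$p" -gt "$start" ]]; then
            qpdf "$in" --pages . "$start-$((p - 1))" -- "$base-$label.pdf"
            start=$p
          fi
          [[ -n "$found" ]] && label="$found"
        done
        qpdf "$in" --pages . "$start-$pages" -- "$base-$label.pdf"   # last section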

    Read the article

  • Unable to access site over HTTPS using self signed certificate

    - by James
    I am developing a REST API which I want to secure with SSL/TLS. I have implemented a large part of the API and tested it over HTTP; now I want to switch it over to HTTPS. At the moment the API is hosted on a Windows XP Professional SP2 box running IIS 5.1 (development environment only) and I used the SelfSSL.exe tool from the IIS 6.0 Resource Kit Tools to generate a server certificate. I then configured my API to use this certificate, which appeared to work: when I attempt to connect to the API over HTTP I get a 403 response saying "... must be accessed over a secure channel...". However, when I attempt to access the same API over HTTPS it just appears to hang! As this is a development environment I don't have a domain name yet (just a static IP address), and the API is running on port 81. Also (in case it matters) the API is the default site (I replaced it). Any ideas why I can't connect using HTTPS?
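    When HTTPS "hangs" like this, it is worth checking from another machine whether a TLS handshake completes at all, and on which port. A rough check with openssl (the address is a placeholder, and this assumes the SSL listener was left on its default port 443 rather than 81):

        # Does anything answer TLS on 443?
        openssl s_client -connect 203.0.113.10:443 < /dev/null
        # Trying TLS against the plain-HTTP port often just stalls, because
        # neither side gets the protocol it expects:
        openssl s_client -connect 203.0.113.10:81 < /dev/null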

    Read the article

  • Synchronize two directories on linux pc

    - by Gab
    I need a distributed filesystem (or a synchronization tool) that is capable of keeping a directory synchronized across 4 PCs. My requirements are:
    - offline access (data must be available offline on each PC)
    - preserve execution rights: some files are marked executable on a Linux partition. This flag should be replicated.
    - efficient sync strategy: some of my files are 20GB and are changed quite often, but in very small parts (VirtualBox images). Delta transmissions are welcome.
    - efficient handling of space: no history for files, and files shouldn't be copied to temp directories "just in case you break it".
    - it must propagate deletions of files
    - modification can happen on any of the 4 PCs and should be propagated when the other PCs are connected.
    Other specs of my setup: sync is over a LAN, and the total amount of data to be synced is around 180GB in some ten thousand files. Changes are small, but can happen in big files. At the moment I'm interested in a Linux-only solution, and conflicts either don't happen or are resolved with "last one wins". I haven't found any good solution yet. I've been trying:
    - unison: the only one working at the moment, but during the hashing phase it hangs my PC for some minutes, disk light steady on.
    - Sparkleshare: doesn't handle large files nicely, and it keeps a history of all your changes that grows indefinitely. They promise this will be fixed in future releases, but at the moment it still doesn't fit my needs.
    - ownCloud: keeps a history of each file I change.
    - Coda: help! I couldn't set it up correctly!
    - git-annex assistant: turns all your files into symlinks and marks the original file as read-only ("just in case you make a mistake while you modify it"!). Before you edit a file you have to issue a special command, "git annex unlock", which creates a local copy of the file, and you have to remember to lock it again if you want it synchronized.
    What should I try next?
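    For comparison, a plain one-way rsync run already covers several of these points (executable bits, deletions, delta transfer of the big images); it is not a multi-master answer, but it shows the flags involved. Host and path names are placeholders:

        # -a keeps permissions including the executable bit, --delete propagates
        # removals, --inplace and --partial help with the 20GB VirtualBox images
        # because only changed blocks are sent and rewritten in place.
        rsync -a --delete --inplace --partial --progress \
          /data/shared/ otherpc:/data/shared/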

    Read the article

  • Splunk is fantastically expensive: What are the alternatives?

    - by samsmith
    This has been discussed before, but it has been several months, so it may be time to revisit it: Earlier discussion RE Splunk alternatives. For the record, Splunk rocks, but the pricing is simply beyond what we can consider (when I spoke with Splunk today, the cost of a system to index 5GB/day of data is over $30,000). That is more than we spend on SQL Server (by a large multiple), more than we spend on a rack of servers (by a multiple), etc. The Splunk sales team is correct that for $30K we get more value and functionality than if we spent the same building our own system, but it doesn't matter: the Splunk cost is simply too high (by a multiple). So, we are looking around! Is anyone out there building a Splunk-like system? Our basic needs:
    - able to listen for syslog messages on multiple UDP ports
    - able to index the incoming data asynchronously
    - some kind of search engine
    - some kind of UI
    - an API to the search engine (to embed in our console)
    We currently need to index 3-5GB/day, but need to be able to scale to 10GB/day or more. We do not need a lot of history (30 days is fine). We use Windows 2008 and 2003 servers. Thanks for your thoughts!

    Read the article

  • Suddenly getting lock timeouts with MySQL

    - by Marc Hughes
    We've got a web app hosted on Amazon Web Services. Our database is a multi-AZ RDS MySQL server running 5.1.57, and 3-4 app servers talk to it. Today we started seeing a lot of errors along the lines of "Lock wait timeout exceeded; try restarting transaction" - almost 1% of POST requests are seeing this. There have been no modifications to the code running on the site, no schema changes, and no big spike in traffic. I've been looking at the processes running, and none seem out of control. I tried scaling our RDS instance from a small to a large, with no effect. Two days ago Amazon had some outages. As part of the recovery from that, our RDS server and our app servers ended up in different availability zones, but all within the same region. Yesterday everything was fine, though, so I'm not convinced that's related. The lock timeouts occur in different types of requests and in different InnoDB tables. I have noticed that the number of open connections jumped when we started seeing problems, but that may be a symptom and not a cause. What are my next steps in debugging this?
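    A few standard MySQL commands usually narrow down which transactions are holding the locks; they can be run from any client that can reach the RDS instance (hostname and credentials below are placeholders):

        # the TRANSACTIONS section lists which statements are waiting and which hold locks
        mysql -h mydb.example.rds.amazonaws.com -u admin -p -e 'SHOW ENGINE INNODB STATUS\G'
        # long-running or idle-in-transaction connections that may be holding locks
        mysql -h mydb.example.rds.amazonaws.com -u admin -p -e 'SHOW FULL PROCESSLIST;'
        # how long a waiter may block before the error fires (default 50 seconds)
        mysql -h mydb.example.rds.amazonaws.com -u admin -p -e "SHOW GLOBAL VARIABLES LIKE 'innodb_lock_wait_timeout';"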

    Read the article

  • Why would I need a firewall if my server is well configured?

    - by Aitch
    I admin a handful of cloud-based (VPS) servers for the company I work for. The servers are minimal Ubuntu installs that run bits of LAMP stacks / inbound data collection (rsync). The data is large but not personal, financial or anything like that (i.e. not that interesting). Clearly people on here are forever asking about configuring firewalls and such like. I use a bunch of approaches to secure the servers, for example (but not restricted to):
    - ssh on non-standard ports; no password typing, only known ssh keys from known IPs for login, etc.
    - https, and restricted shells (rssh), generally only from known keys/IPs
    - servers are minimal, up to date and patched regularly
    - tools like rkhunter, cfengine, lynis, denyhosts etc. for monitoring
    I have extensive experience of Unix sysadmin. I'm confident I know what I'm doing in my setups. I configure /etc files. I have never felt a compelling need to install things like firewalls: iptables etc. Put aside for a moment the issues of physical security of the VPS. My question: I can't decide whether I am being naive, or whether the incremental protection a firewall might offer is worth the effort of learning and installing it and the additional complexity (packages, config files, possible support etc.) on the servers. To date (touch wood) I've never had any security problems, but I am not complacent about it either.
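    For a sense of scale, the kind of ruleset being weighed up here is only a handful of lines. This is a generic default-deny sketch, not a recommendation for these particular servers; ports and source ranges are examples:

        iptables -P INPUT DROP
        iptables -P FORWARD DROP
        iptables -P OUTPUT ACCEPT
        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        # ssh on a non-standard port, only from known addresses
        iptables -A INPUT -p tcp --dport 2222 -s 198.51.100.0/24 -j ACCEPT
        # https for the LAMP bits
        iptables -A INPUT -p tcp --dport 443 -j ACCEPT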

    Read the article

  • h264 inside FLV container vs. MP4 container?

    - by Gotys
    I am developing a tube site, and currently having issues with the h264 format. Looking at YouTube, I noticed they put their hi-def videos into an MP4 container, so logically I did the same. Next, I installed mod_h264_streaming for lighttpd to make streaming and timeline scrubbing work. The problem is that large files (500MB+ at somewhat high resolution) take forever to even start buffering (I read that Flowplayer and other Flash players need to download the metadata first). I moved the moov atom to the front of the file with MP4Box (I tried qt-faststart too), and the problem didn't go away. Next I read online that I need to interleave the audio tracks, so I did that too. No change in slowness. So I tried putting the same exact h264 movie into an FLV container, and playback buffering starts almost instantly - no slowness. So what am I missing here? Why would I choose an MP4 container with the mod_h264_streaming module, which seems super slow, over a regular FLV container with lighttpd's built-in mod_flv_streaming? Obviously many websites pick the MP4 container, but I fail to understand why. And as a side question: I tried using HTML5's VIDEO tag with the same h264 MP4 movie, and the scrubbing is LIGHTNING FAST! I looked into lighttpd's log file, and I noticed that Flash players append video.mp4?start=234 each time the timeline is scrubbed, whereas HTML5's video tag does no such thing. Is this some sort of limitation of Flash? Why can't Flash streaming be as fast as HTML5 streaming? Thanks to all who can help. I very much appreciate this community.
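    For reference, the usual moov-relocation and interleaving steps mentioned above look like this; either tool on its own is normally enough, the file names are placeholders, and ffmpeg's faststart flag needs a reasonably recent build:

        # interleave audio/video in 500 ms chunks and move the moov atom forward
        MP4Box -inter 500 movie.mp4
        # or remux losslessly with the moov atom written at the front
        ffmpeg -i movie.mp4 -c copy -movflags +faststart movie-fast.mp4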

    Read the article

  • Simplest DNS solution for remote offices

    - by dunxd
    I look after a bunch of remote offices that connect via VPN - a Cisco ASA 5505 in each office acts as firewall and VPN endpoint. Beyond that we keep things as simple as possible in the offices to minimise the support burden: we don't have any kind of server except in offices large enough to justify having someone dedicated to IT. Basically there is the ASA, some computers, a network printer and a switch. One of the problems I am seeing in a lot of offices is that DNS requests looking up hosts inside our network often fail - I'm assuming timeouts due to the offices' internet connections (they are all in developing-world countries) having some sub-optimal qualities (e.g. high latency caused by VSAT segments, or packet loss). The obvious solution is some sort of local DNS service that can serve local requests - I think it would need to do zone transfers from our Microsoft Windows 2008 R2 DNS servers at HQ. However, simply installing Windows servers in each office is both expensive and creates a support burden. This got me thinking about pfSense/m0n0wall on embedded devices - those can act as a DNS server, and could be configured at HQ and sent out as something that just needs to be plugged into the network and can then be forgotten about by the staff locally. Maybe there are alternatives to the ASA 5505 that include some DNS functionality. Has anyone here dealt with this problem, either using some kind of embedded device or some other solution? Any gotchas or reasons to avoid what I have suggested?
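    As an illustration of the "small local resolver" idea: a caching forwarder such as dnsmasq on an embedded box can send only the internal domain to HQ and answer repeats from its cache, which sidesteps zone transfers entirely. The domain and server addresses here are examples, not the real HQ servers:

        dnsmasq --no-daemon \
          --server=/corp.example.com/10.0.0.10 \
          --server=/corp.example.com/10.0.0.11 \
          --cache-size=1000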

    Read the article

  • Reading log files from web application

    - by Egorinsk
    I want to write a small PHP application for monitoring logs on a Debian server, including syslog logs and Apache/PHP messages. The problem is that the Apache user (www-data) has no access to the /var/log directory. What would be the best way to grant a PHP application access to the logs? Assume that log files can be really large, like hundreds of megabytes. I have some ideas:
    - Write a shell script that is run via sudo and tails the last 512 KB of the log into a separate file that can be read by the application - inefficient, because it forks a new process and the data has to be read twice.
    - Add www-data to the adm group (which can read logs) - insecure.
    - Start a PHP process via cron every minute to read the logs - not very good, because it doesn't allow real-time monitoring. Also, this script would run even when I'm not reading logs, and consume CPU time (the server is in the cloud, and I'll have to pay for it).
    - Create hardlinks to all the log files with lowered permissions - I guess that won't work, because logrotate recreates the log files and their inode numbers will change.
    - Start a separate nginx/Apache server under a privileged user that may read the logs.
    Maybe someone has a better solution?
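    One option not on the list, sketched here under the assumption that the filesystem has ACL support enabled: POSIX ACLs can give www-data read-only access to /var/log without adding it to the adm group.

        # read access to what is there now (capital X = traverse directories only)
        setfacl -R -m u:www-data:rX /var/log
        # default ACL so newly created/rotated files inherit the same permission
        setfacl -R -d -m u:www-data:rX /var/log
        # verify on one file
        getfacl /var/log/syslog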

    Read the article

  • Index a low-cost NAS on Windows 7

    - by JcMaco
    Has anyone found a way to index the files stored on Network Attached Storage under Windows 7, so that the files are available in Windows Search and Libraries? I am referring to cheap, widely available NAS boxes like the Western Digital My Book series that run an embedded Linux server. Similar question: http://windows7forums.com/windows-7-networking/6700-indexing-nas-drive-libraries.html
    EDIT: Windows Help proposes making the files stored on the NAS available offline. This is obviously not a good solution if the NAS holds more data than the client can store:
    "If the folder is on a network device that is not part of your homegroup, it can be included as long as the content of the folder is indexed. If the folder is already indexed on the device where it is stored, you should be able to include it directly in the library. If the network folder is not indexed, an easy way to index it is to make the folder available offline. This will create offline versions of the files in the folder, and add these files to the index on your computer. Once you make a folder available offline, you can include it in a library. When you make a network folder available offline, copies of all the files in that folder will be stored on your computer's hard disk. Take this into consideration if the network folder contains a large number of files."

    Read the article

  • What USB key would you recommend for running a Windows 7 VM?

    - by Darryl Hein
    Because I can't find a good PHP editor for OS X, I develop in Windows with PhpEd. At the moment, my development time is split between a desktop and a laptop. To partially solve the problem of having two different environments, I have installed a virtual machine (through VirtualBox) and put the hard drive file on an external hard drive, which I've been connecting over FireWire 800. I have two problems with this setup: (1) the hard drive is fairly large, so carrying the laptop and the drive pretty much requires a backpack; (2) the hard drive draws quite a bit of power and therefore reduces battery life (by about 40%). My thought is to move the VM hard drive onto a USB key. I realize it will be slower, but as I'm just using it for PHP development, there isn't a lot of disk activity in the VM. The only really intense time is boot-up; otherwise it just about sits idle. Does anyone have any suggestions for a USB key to use for the VM? It would need to be a minimum of 32GB.

    Read the article

  • Using awk to split a text file every 10,000 lines

    - by Sneaky Wombat
    I have a large gzip'd text file. I'd like to do something like:
        zcat BIGFILE.GZ | awk (snag 10,000 lines and redirect to...) | gzip -9 smallerPartFile.gz
    In the awk part up there, I basically want it to take 10,000 lines, send them to gzip, and then repeat until all lines in the original input file are consumed. I found a script that claims to do this, but when I run it on my files and then diff the original against the pieces that were split out and merged back together, lines are missing. So something is wrong with the awk part, and I'm not sure what is broken. Here's the code. Can someone tell me why this doesn't yield a file that can be split, merged and then diff'd successfully against the original?
        # Generate files part0.dat.gz, part1.dat.gz, etc.
        # restore with: zcat foo* | gzip -9 > restoredFoo.sql.gz (or something like that)
        prefix="foo"
        count=0
        suffix=".sql"
        lines=10000  # Split every 10000 lines.
        zcat /home/foo/foo.sql.gz | while true; do
            partname=${prefix}${count}${suffix}
            # Use awk to read the required number of lines from the input stream.
            awk -v lines=${lines} 'NR <= lines {print} NR == lines {exit}' >${partname}
            if [[ -s ${partname} ]]; then
                # Compress this part file.
                gzip -9 ${partname}
                (( ++count ))
            else
                # Last file generated is empty, delete it.
                rm -f ${partname}
                break
            fi
        done
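    Not a fix for the awk loop itself, but for comparison, a sketch of the same job done with GNU split, which reads the stream exactly once and so cannot drop lines between chunks (requires coreutils 8.13 or newer for --filter; names are placeholders):

        zcat /home/foo/foo.sql.gz | split -l 10000 -d --filter='gzip -9 > "$FILE.gz"' - part
        # restore with: zcat part*.gz | gzip -9 > restoredFoo.sql.gz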

    Read the article

  • Adding a subnet to a vSphere setup with a single vCenter and ESXi host

    - by Ilya Rakhlin
    Let me start off by saying that I do not specialize in networking. I am in the process of adding additional VMs to a testing environment and wanted some recommendations. In this case I am running a single ESXi 5.1 host and a single vCenter management server. The problem is that I need another range of IP addresses added to the existing setup, hopefully without reconfiguring everything. Currently the ESXi host is configured with IP 192.168.100.200, gateway 192.168.100.1 and subnet mask 255.255.255.0. All of the VMs run some version of Linux with hard-coded IP addresses in that range, using that subnet. The VMs I am about to deploy I want on the 192.168.101.x network. Is it possible to add an additional subnet to this existing system that will also communicate with the current subnet? The ESXi host has 6 physical NICs but only one connected, as it is only a testing system; not sure if that matters. Are there any other ways to accomplish this, hopefully without restarting or at least without reconfiguring the IP addresses of each VM? Reason: due to the way the VMs are configured to run the applications we need, I am using a large number of the IPs in the current range (mostly VIPs). I will be setting up a new version of this "environment" while keeping the old one, so I could potentially run out of IP addresses.
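    If the new VMs sit on the same virtual switch/port group, the two ranges can share the wire; whatever routes between them (or any VM that needs a foot in both ranges) just carries an address in each. Addresses and the interface name below are examples only:

        # give a Linux guest (or the gateway VM) a second address in the new range
        ip addr add 192.168.101.10/24 dev eth0
        # alternatively, renumbering everything into a single /23 covers
        # 192.168.100.0 - 192.168.101.255 as one subnet
        ip addr add 192.168.100.200/23 dev eth0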

    Read the article

  • Slow Local Network, Windows 7, Snow Leopard, WiFi/Wired

    - by WerkkreW
    I am experiencing really poor local network performance in my home. I was recently using a Linksys WRT54G router with DD-WRT on it and a couple of comparable Linksys-G PCI cards, but decided to upgrade hoping it would help with my performance issues. The computers in my house are connected as follows:
    - Comcast Business Class commercial 25mbps/10mbps (verified)
    - D-Link DGL-4500 Wireless N router
    - Windows 7 x64 - D-Link DWA-552 Wireless-N
    - Windows 7 x64 - D-Link DWA-552 Wireless-N
    - Mac Mini 10.6.2 - AirPort Extreme N
    - PlayStation 3, hard wired
    - Xbox 360, hard wired
    The problem is very specific. Web browsing and uploading/downloading files from the internet is fine, more than fine. But if I want to, say, stream a video from one of my Windows 7 computers to my PS3, or copy a large video file between either of the PCs or the Mac, I get a consistent 500-900Kbps throughput at the high end. If I open my network browser or try to browse my homegroup, the response time is horrible. Both of my Windows computers show strong wireless signals with a connection speed of 300Mbps. I know I can never expect to achieve anything near those speeds, but 500Kbps? Here is what I have tried so far:
    - enabled single-mode N-only and N/G-only on the router
    - WPA2 with AES encryption
    - disabled "Remote Differential Compression" in Windows 7
    - disabled TCP "auto-tuning"
    - used other software for file copies, such as TeraCopy
    I am at the end of my rope. Unfortunately I live in a 75-year-old home with plaster walls, so hard-wiring my entire house isn't really an option right now. Any ideas to help me get decent speed when transferring files across my network would be greatly appreciated.
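    One diagnostic worth adding to that list: measure raw TCP throughput between two of the machines with iperf, which takes SMB/homegroup and the copy tools out of the picture entirely (iperf has to be installed on both ends; the address is a placeholder):

        iperf -s                      # on one machine, e.g. the Mac Mini
        iperf -c 192.168.0.10 -t 30   # on the other, pointing at the first machine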

    Read the article

  • Computer spontaneously reboots during heavy file copies to/from disk

    - by Mark Hosang
    I've been fighting with this problem for the last 3 weeks: my machine just instantly reboots. No BSOD, and when I check the event log all that is reported is the generic "Kernel-Power" error, with the detailed information pointing to a hard crash. This is a machine that worked for 18 months before the crashes started. They began after I added 3 HDDs in a RAID-5, upped the memory to 12GB, moved to a new house, added an SSD and added about 5 case fans. I have since eliminated the RAID, and determined that the SSD was not the cause (it still crashed even with the SSD disconnected). I've run memtest several times overnight with no memory problems showing up. I've run IntelBurnTest to max out the CPU to see if it was a heat issue; at full tilt after 20 minutes it was only at 85C and the machine didn't crash. I also looked at the voltages during this test (screenshots were attached at the bottom of the original post). I've ruled out a software issue by reinstalling Windows 7 Ultimate x64 a total of 5 times, but it even crashes during the install - sometimes during the file copying at the beginning, during uncompressing files, or while running Windows Update. The only discernible pattern I can see is that it seems to crash when hard disks might be spinning up or when they are accessed heavily during large file transfers. My current guess is that it is an issue with the motherboard, the PSU or the power coming through the outlet. Any suggestions for what I could try to troubleshoot, or what may be wrong?
    Specs:
    - PSU: Seasonic M12 700W
    - Memory: 12GB
    - CPU: i7-920 with stock heatsink
    - Motherboard: Asus P6T
    - HDDs: 3 WD Green and 1 Corsair Force 3 120GB with 1.3.3 firmware
    [Screenshots of full-tilt and idling voltages omitted.]

    Read the article

  • Roaming Profiles & Redirected Folders - storage consumption? offline files and caching?

    - by Ben Swinburne
    I understand the concepts of both roaming profiles and folder redirection and have used both separately before. I am about to set up a network from scratch and would ideally like to use both, primarily for the following reasons:
    - Roaming profiles allow users to log on to any machine and have their profile.
    - Redirected folders allow users to have their My Documents, Desktop etc. backed up without the need to log off at the end of the day. The servers can run their backups overnight and no files are missed because a user didn't log off.
    - Redirected folders largely alleviate the slow log-in times caused by large profiles.
    My question is: if some of the folders are redirected and therefore not part of the roaming profile, what happens on machines which truly roam (i.e. laptops)? If there are offline files or a cache, does this bring back the problem where a user has to log off? And with both enabled, is there any duplication - i.e. if I have a users$ share and a profiles$ share, would Desktop exist twice, for example?

    Read the article

  • Should I embed the sRGB color profile in JPEG files?

    - by basic6
    I have a large (growing) collection of scanned images. They are TIFF files, mostly 48-bit, in the Adobe RGB color space, with the color profile embedded in the files. When such a file is opened in IrfanView (with plugins), it reports (Image - Information) Adobe RGB 1998. "Normal images", like the JPG files from a digital camera, do not (necessarily) have a color profile embedded in the file. I understand that it's necessary to include the Adobe RGB profile in an image file which uses the Adobe RGB space, so the color values can be interpreted correctly. (As a test, an image with a completely different color profile is rendered as sRGB by programs that ignore the included profile, like MSIE8 or Gwenview.) I'm planning to convert my TIF files to JPG, so I'm wondering if there's anything wrong with using IrfanView to save them as sRGB without embedding the sRGB profile. I've heard that images should always be saved with the color profile included. Since every image seems to be interpreted as sRGB by default (by software without color management), I don't understand why the sRGB profile should be included.
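    For what it's worth, the conversion itself can also be done outside IrfanView. A hedged ImageMagick sketch - the sRGB .icc path differs between systems, and this assumes the TIFFs really do carry their Adobe RGB profile, so -profile converts rather than merely assigns:

        # convert from the embedded Adobe RGB profile to sRGB, keeping sRGB embedded
        convert scan.tif -profile /usr/share/color/icc/sRGB.icc -quality 92 scan.jpg
        # check what ended up in the file
        identify -verbose scan.jpg | grep -i profile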

    Read the article

  • How to handle files that don't need version control in Mercurial

    - by richardh
    I am new to Mercurial, and for the most part I write LaTeX reports and do statistical calculations in R using .csv and/or .sqlite files. Regarding LaTeX, all I really care about is the .tex file. Regarding R, I don't need version control on the .csv or .sqlite files because they are static. When I run 'hg add' on a repo with a .csv and/or .sqlite file, I get a warning like:
        rev2.sqlite: up to 3070 MB of RAM may be required to manage this file (use 'hg revert rev2.sqlite' to cancel pending addition)
    So I revert and subsequently use adds like hg add -X *.sqlite. I guess I really have two questions: (1) Should I ignore these warnings? Because these large files are static, can I just add them to the repo knowing that the diffs will always be empty, and not worry about wasted resources? (2) If I should keep excluding these files from the repo, is there a way to make that the default? i.e. add something to my .hgrc file that always appends options like -I *.tex -I *.R to my 'hg add' commands? Thanks!
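    On point (2), one possibility (a sketch, not tested against this particular setup) is to exclude the static files with an .hgignore in the repository root, or to track them via the bundled largefiles extension so they are stored without being diffed. The patterns are examples:

        # .hgignore in the repository root
        syntax: glob
        *.sqlite
        *.csv

        # or, added to ~/.hgrc, enable the largefiles extension instead
        [extensions]
        largefiles =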

    Read the article

  • 7zip many files from different folders?

    - by mafutrct
    I would like to add a large number of files with different names from different folders to a single 7zip archive using 7za.exe. This should be simple, but it has turned out to be a major pain. I created a file that contains the paths (7za a @list.txt), but once there are too many files (~100) it fails. Apparently the content of the argument file is pushed onto the command line buffer, which is far too small (the number of files to add is around 1 million). Splitting the process up by adding the files one by one is not feasible due to the way 7za works: when adding the next file, it creates a copy of the archive, adds the file to the copy and finally replaces the original. This is terribly slow once the archive gets to a couple hundred MB in size. So far I am using a combination of the two approaches, adding a dozen files at a time in a loop, but it is an unreliable hack and still very slow. Is there a better way to do it? I tried to use 7zip wrapper DLLs (I'm a C# programmer), but none of them worked reliably and I was repeatedly told to just use 7za instead.
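    A different angle, sketched under the assumption that a tar binary is available on the machine (bsdtar or cygwin on Windows): let tar read the file names from the list - no command-line length limit - and pipe the single stream into 7za, so the archive is written exactly once:

        tar -cf - -T list.txt | 7za a -si archive.tar.7z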

    Read the article

  • Caching/preloading files on Linux into RAM

    - by Andrioid
    I have a rather old server that has 4GB of RAM and is pretty much serving the same files all day, but it is doing so from the hard drive while 3GB of RAM are "free". Anyone who has ever run a RAM drive can testify that it's awesome in terms of speed. The memory usage of this system is usually never higher than 1GB/4GB, so I want to know if there is a way to use that extra memory for something good. Is it possible to tell the filesystem to always serve certain files out of RAM? Are there any other methods I can use to improve file-reading performance by using RAM? More specifically, I am not looking for a 'hack' here. I want filesystem calls to serve the files from RAM without needing to create a RAM drive and copy the files there manually, or at least a script that does this for me. Possible applications are:
    - web servers with static files that get read a lot
    - application servers with large libraries
    - desktop computers with too much RAM
    Any ideas?
    Edit: Found this very informative: The Linux Page Cache and pdflush. As Zan pointed out, the memory isn't actually free. What I mean is that it's not being used by applications, and I want to control what should be cached in memory.
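    Building on the page-cache point in the edit: warming the cache is just a matter of reading the files once, and vmtouch (where available) can additionally pin them so they are not evicted. Paths are examples:

        # read everything once so the kernel page cache holds it
        find /var/www/static -type f -exec cat {} + > /dev/null
        # with vmtouch installed: -t loads into cache, -l locks it in memory
        vmtouch -t /var/www/static
        vmtouch -l /var/www/static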

    Read the article

  • Can any iSCSI NAS appliance replicate / clone a LUN to an external drive?

    - by Boden
    I would like to backup using Windows Imaging to some kind of NAS appliance. I believe this will require the NAS to support iSCSI. I would then like the appliance to support the replication of the iSCSI LUN to an external eSATA or USB disk connected directly to the appliance. I've found plenty of NAS appliances that can do iSCSI and replicate to an external drive, but none that I've found thus far can do both at once. That is, the devices can do iSCSI, but then the replication feature doesn't work. The idea here is to backup to an appliance located in a secure office far away from the server room. Offsite backups to external hard drive could be managed from the appliance. The benefits of such a setup would be:
    1) very unlikely that fire or random theft would affect both the server-room backup and the "remote" backup appliance
    2) offsite backups could be managed by multiple trusted people without granting access to the server room
    3) Windows imaging provides poor man's deduplication, so each backup volume can contain a decent backup history
    I understand why this would be a non-trivial thing to implement, but I'm wondering if such a thing exists - preferably a tabletop, low-to-medium-cost device. Alternative solutions welcome. NOTE: I'm backing up very few but very large files, so file replication is not a good option.

    Read the article

  • How to reinstall bootloader after migration to SSD

    - by hijarian
    I must say, it was difficult to name this question. Basically, I need to properly reinstall the bootloader on my system, because I already have working system disks for my OSes. The long story is this: I had a large, slow HDD with a Windows 7 & Debian Wheezy dual-boot on it, perfectly bootable. Then I ordered an SSD and prepared my system partitions to fit onto the much smaller SSD. I wanted the following schema:
    - 128 GB Windows
    - 24 GB / on Debian
    - 86 GB /home on Debian
    (Strange size for /home because there's no such thing as a true 256GB disk drive.) So I prepared such partitions on the original HDD, installed the new SSD, loaded a GParted live USB (I can't remember now what it was really called), and then just copy-pasted the partitions from the HDD to the SSD. So now I have the following partitions across the physical disks:
    SSD:
    - 128 GB copy of the original Windows partition
    - 24 GB copy of (presumably) the Debian /
    - 86 GB copy of (presumably) the Debian /home
    HDD:
    - 128 GB Windows
    - 24 GB / on Debian
    - 86 GB /home on Debian
    - several other partitions with non-system data
    The behavior of the system right after the Ctrl+C, Ctrl+V in GParted was: no GRUB, the system boots right into the Windows on the HDD. The BIOS is set to boot from the SSD first. I created a Debian Testing installation USB, loaded it in rescue mode, found that it identified my SSD as /dev/sda, and installed GRUB to /dev/sda. Now my system loads a GRUB which lists both Windows and Debian - from the HDD. So I am back in the initial position. Please, how should I set up GRUB so it loads the OSes correctly from the SSD? Should I fire up my Debian, fiddle with the GRUB config and reinstall it again to the same place (the SSD)?
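    A hedged sketch of the usual rescue-environment sequence, assuming the SSD really is /dev/sda and its copied Debian root is the second partition (worth confirming with lsblk and blkid first, since copy-pasted partitions can even share UUIDs with the originals, which confuses os-prober and fstab):

        mount /dev/sda2 /mnt                 # the 24 GB Debian / copied to the SSD
        for fs in dev proc sys; do mount --bind /$fs /mnt/$fs; done
        chroot /mnt grub-install /dev/sda    # put GRUB in the SSD's MBR
        chroot /mnt update-grub              # regenerate grub.cfg against the SSD partitions
        # also check that /mnt/etc/fstab points at the SSD copies, not the HDD originals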

    Read the article

  • Super slow opening my downloads folder

    - by Mark
    I have an exe file in my downloads folder that I half-downloaded through uTorrent (it's not piracy - a legitimate file from people who use BitTorrent to distribute large files). I think I tried to open it while it was still sharing, that is, before stopping the upload. That actually froze my computer. After restarting, I set the file to be deleted in uTorrent. Unfortunately, even though uTorrent no longer sees that file, it is still visible in my downloads folder. Whenever I try to open the downloads folder it literally takes 10 minutes or more: it opens, but is empty, and the blue progress bar needs a long time to complete. After it completes I can use the folder normally, but opening and closing things in that folder takes a long time. I can see the exe that I tried to download. I tried to delete it, but it was taking so long (30+ minutes) that I eventually just hit cancel. Even that didn't work, and it was slowing down the computer; I couldn't figure out how to stop the delete, so I just pulled the plug. Should I just forget about that downloads folder and set up a new one? Is there something I can do? Thanks.

    Read the article

  • What are the replacement options for an IDE hard disk for a DOS-based system?

    - by dummzeuch
    I have got a few "embedded" systems running MS-DOS 6.2 which boot from and store data to IDE hard disks. Since these drives are nearing their end of life, the question arises how we can replace them. The requirements are:
    - DOS must be able to install on and boot from these drives.
    - They must be able to sustain heavy (mostly write) access.
    - If possible, they should be able to survive moderate vibration (not too bad, since the current hard disks have survived several years of it).
    I have considered the following options so far:
    - Other IDE hard drives: unfortunately modern IDE drives are too large, so DOS cannot boot from them even if I create small partitions. Older IDE drives are just that: old, so they are probably not the most reliable option any more.
    - SSDs: there are a few SSDs with an IDE interface available. I have not yet tried them. Does anybody have any experience with them? They look like the ideal replacement, provided that DOS can boot from them and that writing speed does not deteriorate too much (the old hard disks are no race cars either).
    - CompactFlash: there are adapters for using CF with IDE controllers and they work fine. DOS can boot from them and they have no problems at all with vibration. What I am not sure about is their durability: DOS uses FAT, so the same few sectors are written every time the medium is written to.
    - IDE-to-SATA converters: I have no idea whether they are any good. Has anybody tried them? It might be an option to use one of these to connect a SATA SSD to the system.
    Are there any alternatives that I have missed? (We are working on replacing these systems, but it will still take a few years.)

    Read the article

  • Different external IP addresses from different sites

    - by user630286
    My router is ClearOS 6 (CentOS 6). The router has two external (internet) network connections from two ISPs. The primary connection is eth2, connected to a cable modem, and the secondary one is ppp0, connected to a DSL modem. I have assigned eth2 as the primary connection (with a high metric value); in fact this is done through ClearOS's multi-WAN web interface. I have a test in Nagios to monitor whether the primary connection is in use, based on the result of curl ifconfig.me. But it seems that ifconfig.me always reports the IP address of my secondary connection. I tested it through a browser: yes, ifconfig.me gives the secondary connection's (ppp0) IP address, but whatismyipaddress.[com|org] gives my primary IP address (eth2). I checked the default route on the router with ip route list 0/0, which also shows the primary connection (eth2) as the default route. Both traceroute www.google.com and traceroute ifconfig.me seem to go through the primary connection (eth2). As our secondary internet connection has only a limited download allowance, I don't want to end up having to pay a large sum at the end of the month. Does anybody have an idea why ifconfig.me shows my secondary address? And what is the best way to ensure that my router (and thus the LAN) uses the right internet connection?
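    Two quick checks from the router itself sometimes make this clearer: which path the lookup service's address would take according to the routing table, and what each WAN link reports as the public IP. The interface names match the question; note that multi-WAN setups often route by policy/connection marks, so ip route get only shows the plain routing-table answer:

        ip route get $(dig +short ifconfig.me | head -1)
        curl --interface eth2 ifconfig.me   # public address as seen via the cable link
        curl --interface ppp0 ifconfig.me   # public address as seen via the DSL link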

    Read the article
