Search Results

Search found 68828 results on 2754 pages for 'knapsack problem'.


  • Avoid unwanted path in Zip file

    - by jerwood
    I'm making a shell script to package some files. I'm zipping a directory like this: zip -r /Users/me/development/something/out.zip /Users/me/development/something/folder/ The problem is that the resulting out.zip archive has the entire file path in it. That is, when unzipped, it contains the whole "/Users/me/development/something/" path. Is it possible to avoid these deep paths when putting a directory into an archive? When I run zip from inside the target directory, I don't have this problem: zip -r out.zip ./folder/ In this case, I don't get all the junk. However, the script in question will be called from wherever. FWIW, I'm using bash on Mac OS X 10.6.
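
    The usual workaround is to run zip from inside the parent directory via a subshell, so the archive stores relative paths no matter where the script is invoked from. A minimal sketch, reusing the paths from the question:

        #!/bin/bash
        # cd in a subshell; the calling script's working directory is untouched.
        src="/Users/me/development/something"
        (cd "$src" && zip -r "$src/out.zip" folder/)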

    Read the article

  • How to wrap command1's output strings in single/double quotation marks (' or ") to feed to command2?

    - by infantcoder
    For example, I want to use mplayer to play the music from several dirs, typing something like this in bash: $find './l_music/La Scala Concert 03 03 03' './l_music/Echoes The Einaudi Collection' './l_music/Ludovico Einaudi - The Royal Albert Hall Concert [2 CD] (2010)' -name '*.mp3' | xargs mplayer The find command returns paths whose directory and file names always contain spaces, and mplayer on the right side of the pipe does not accept those mp3 paths. I figured that if I wrapped the strings find returns in single/double quotation marks (' or ") before feeding them to mplayer, the problem would be solved. But how can I do this using only the bash command line, without writing a bash or perl script? A Perl one-liner using Perl command-line options would also be welcome.
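
    Quoting the file names won't survive the pipe, but NUL-delimiting them will. A sketch of the two standard one-line fixes (directory arguments shortened for readability):

        # NUL-terminate each path so xargs keeps spaces inside the names:
        find './l_music' -name '*.mp3' -print0 | xargs -0 mplayer
        # Or drop the pipe entirely and let find invoke mplayer itself:
        find './l_music' -name '*.mp3' -exec mplayer {} +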

    Read the article

  • Programs minimized for a long time take a long time to "wake up"

    - by bart
    I'm working in Photoshop CS6 and multiple browsers a lot. I'm not using them all at once, so some applications stay minimized to the taskbar for hours or days. The problem is, when I try to maximize them from the taskbar, it sometimes takes longer than starting them fresh! Photoshop especially feels really weird for many seconds after finally showing up: it's slow, unresponsive, and sometimes totally freezes for a minute or two. It's not a hardware problem, as it's always been like that on all my PCs. Would I still notice it after upgrading my HDD to an SSD and adding RAM (my main PC currently holds 4 GB)? Could people with powerful PCs / Macs tell me whether it also happens to them? I guess OSes somehow "focus" on active software and move resources away from programs that are running but not being used. Is it possible to somehow set RAM / CPU / HDD priorities for, let's say, Photoshop, so it won't slow down after a long period of inactivity?

    Read the article

  • Copying a user profile on Windows 7

    - by SwissCoder
    Is there a tool or a trick to easily duplicate a Windows profile? My problem is that I have a local user profile and I'd like to copy it for another user. Additionally, that profile was created locally when a domain user logged in, and I'd like to create a copy of it for a non-domain user. I hope it's clear what my problem is. Thank you for reading! I've just seen there is a similar question: Copy Windows 7 profile from one domain user to another. Now I'd like to know whether it is possible to simply change the user profile's name and password. Is this somehow possible?

    Read the article

  • Can't open web pages in Google Chrome

    - by javapowered
    When I type "google.com" into Google Chrome, I receive this error: 404. That’s an error. The requested URL /cgi-bin/authme?s=0.2068704296834766 was not found on this server. That’s all we know. It seems I'm somehow being redirected for some reason. Possibly a Lenovo notebook problem? At the same time, IE works fine. This problem is reproducible with other web pages too; for example, I'm redirected from vkontakte.ru to a page that doesn't exist either: http://www.vkontakte.ru/cgi-bin/authme?s=0.7140683813486247 Also, I see the message "Lütfen yönlendirilirken bekleyin" (Turkish for "Please wait while you are being redirected") when Chrome starts, which is very strange because I only have Russian and English languages installed in the system.

    Read the article

  • Application base [my path to install here] for host [hostnamehere] does not exist or is not a directory

    - by Hyposaurus
    I am trying to start a new installation of Tomcat 7 (on Arch Linux). I have everything configured the way I normally would, but I am running into the problem described in the title. This means that Tomcat starts but nothing in that host gets deployed. The Host entry from my server.xml: <Host name="localcity" appBase="/home/gary/Sites/localcity/" autoDeploy="true" unpackWARs="false"> </Host> And a listing of the directory it is in: drwxrwxr-x 4 doug tomcat 4096 Apr 15 11:52 . drwx------ 33 gary users 4096 Apr 15 20:40 .. drwxrwxr-x 2 tomcat tomcat 4096 Apr 15 20:40 localcity drwx------ 2 gary users 4096 Mar 31 10:10 lod It looks like my other installations, but I am not sure what the problem is.
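
    One plausible reading of that listing: the appBase sits under /home/gary, which is drwx------, so the tomcat user cannot traverse into it even though localcity itself is group-readable. A hedged sketch to confirm and fix:

        # Reproduce the failure as the tomcat user:
        sudo -u tomcat ls /home/gary/Sites/localcity/
        # If that prints "Permission denied", grant traversal on the parent:
        chmod o+x /home/gary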

    Read the article

  • Cheap Windows shared hosting service

    - by Elangovan
    Hi, I have recently purchased a domain from GoDaddy, and now I am looking for cheap Windows hosting. My requirements:
    - Shared Windows hosting, cheap in price
    - At least one SQL Server DB and one MySQL DB
    - Support for at least ASP.NET 3.5 and PHP
    - ASP.NET MVC support would be good (no problem if it is not available)
    - Ability to install third-party blog software
    - Bandwidth, total space and performance are not very important
    - Silverlight is an added advantage (no problem if it is not available)
    - No advertisements or banners added to the site by the hosting company
    - Support for subdomains

    Read the article

  • Nginx won't start with "fastcgi_split_path_info" error

    - by Ke
    Hi, I heard that nginx is faster, and since I'm on a VPS with low RAM I thought I'd try it out. I went through this tutorial: http://www.howtoforge.com/installing-php-5.3-nginx-and-php-fpm-on-ubuntu-debian But I now get the following error: unknown directive "fastcgi_split_path_info" in /etc/nginx/sites-enabled/default:28 Anyone know what might be causing the problem? I can't find any reference to it on Google. Also, I have heard conflicting things about Nginx vs Apache; some say use one, some say the other. I'm using all sorts of features such as rewrite rules, proxies etc. Am I setting myself up for a fall by using Nginx? If I go for Apache, does anyone know of any way to tweak it so that it performs better on a low-RAM VPS? Cheers, Ke
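
    An "unknown directive" error usually means the running binary simply does not know the directive; fastcgi_split_path_info only appeared in nginx 0.7.31, and distribution packages of that era were often older. A quick check, plus the typical usage the tutorial's config is aiming for:

        # Check the installed version first:
        nginx -v

        # Typical location block (only valid on nginx >= 0.7.31):
        location ~ \.php$ {
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9000;
        }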

    Read the article

  • USB drivers installed twice

    - by stupid-phil
    Hello, I have a problem with USB drivers on Windows 7 64-bit. When I start up my laptop, the mouse I have plugged into a USB port does not work. I have to open the Device Manager, where there are two "Standard Enhanced PCI to USB Host Controller" entries. One is marked as faulty (yellow triangle). I uninstall it and scan for changes; then it reappears, but not marked as faulty, and the mouse works. I have to do this every time I reboot. A colleague has the same problem; both are Dell laptops, but different models. I've tried uninstalling all USB controllers, but after a reboot they are all reinstalled, along with the faulty entry. This only started happening in the last few weeks. Any help appreciated; it's driving me nuts. Thanks, Phil

    Read the article

  • Disable caching on a specific Classic ASP page

    - by David Brunelle
    Hi, not sure if I'm really on the right forum, but if not, just tell me. I have a page coded in Classic ASP which is used to send email. We are currently having a problem in which the page sometimes seems to be sent twice. Upon checking, we found out that those who have this problem are coming from big organisations, so it was suggested that their servers might cache the file for some reason. I would like to know: is there a way in HTML (or Classic ASP) to prevent that from happening? Or is it in IIS that we must set this up? Thanks,
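
    On the Classic ASP side, the standard anti-caching response headers can be set at the top of the page; a hedged sketch (whether this cures the double submission depends on what the organisations' proxies actually honour):

        <%
        ' Ask browsers and intermediate proxies not to cache or replay this page.
        Response.Expires = -1
        Response.CacheControl = "no-cache, no-store, must-revalidate"
        Response.AddHeader "Pragma", "no-cache"
        %>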

    Read the article

  • Internet Explorer 6 "Encountered an Error" ntl.dll issue on XP and Win2000

    - by Gary B2312321321
    As we have some PCs still running Windows 2000, and this is beyond our control, we use IE 6 so we can keep the IE platform standard. Recently, on some XP and Win2000 PCs, IE has been crashing with the "Encountered a Problem and Needs to Close. Do you want to send a report to Microsoft?" error when entering some sites. This happens even after a format and reinstall with a minimal software load (Office 2000, some Avaya software, Kaspersky, tapiex.dll). The module pointed to in the error report is ntl.dll. There are lots of reports of the problem over the net, but has anyone resolved this issue? (The latest IE6 updates are installed.) Also please note there were no third-party IE add-ons, spyware, viruses or adware. Hope someone can help.

    Read the article

  • "No route to host" with ssl but not with telnet

    - by Clemens Bergmann
    I have a strange problem connecting to an https site from one of my servers. When I type: telnet puppet 8140 I am presented with a standard telnet session and can talk to the server as always: Connected to athena.hidden.tld. Escape character is '^]'. GET / HTTP/1.1 <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN"> <html><head> <title>400 Bad Request</title> </head><body> <h1>Bad Request</h1> <p>Your browser sent a request that this server could not understand.<br /> Reason: You're speaking plain HTTP to an SSL-enabled server port.<br /> Instead use the HTTPS scheme to access this URL, please.<br /> <blockquote>Hint: <a href="https://athena.hidden.tld:8140/"><b>https://athena.hidden.tld:8140/</b></a></blockquote></p> <hr> <address>Apache/2.2.16 (Debian) Server at athena.hidden.tld Port 8140</address> </body></html> Connection closed by foreign host. But when I try to connect to the same host and port with SSL: openssl s_client -connect puppet:8140 it does not work: connect: No route to host connect:errno=113 I am confused. At first it sounded like a firewall problem, but that can't be it, can it? A firewall would also block the telnet connection. As a firewall I am using ferm on both servers. The systems are Debian Squeeze VMs. [edit 1] Even when I try to connect directly to the IP address: openssl s_client -connect 198.51.100.1:8140 #address exchanged connect: No route to host connect:errno=113 Bringing down the firewalls on both hosts with service ferm stop does not help either. But when I run openssl s_client -connect localhost:8140 on the server machine, it connects fine. [edit 2] If I connect to the IP with telnet, it does not work either: telnet 198.51.100.1 8140 Trying 198.51.100.1... telnet: Unable to connect to remote host: No route to host The confusion might come from IPv6. I have IPv6 on all my hosts, and it seems that telnet uses IPv6 by default, which works. For example: telnet -6 puppet 8140 works, but telnet -4 puppet 8140 does not. So there seems to be a problem with the IPv4 route. openssl seems to use IPv4 only (or by default) and therefore fails, while telnet uses IPv6 and succeeds.
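
    A short diagnostic sketch for separating the IPv4 and IPv6 paths, consistent with the telnet -4 / -6 observation at the end of the question:

        # What does the resolver return for each address family?
        getent ahostsv4 puppet
        getent ahostsv6 puppet
        # Does the kernel have an IPv4 route to the target at all?
        ip route get 198.51.100.1
        # Force IPv4 to confirm it is the v4 path that fails:
        telnet -4 puppet 8140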

    Read the article

  • SquirrelMail error after WHM installation

    - by Pleerock
    I just installed WHM/cPanel, created some accounts, created a mail account in one of them, and entered SquirrelMail to check the mail. Unfortunately it gives me an error: Warning: fsockopen() [function.fsockopen]: php_network_getaddresses: getaddrinfo failed: Name or service not known in /usr/local/cpanel/base/3rdparty/squirrelmail/plugins/login_auth/functions.php on line 129 Warning: fsockopen() [function.fsockopen]: unable to connect to localhost:143 (php_network_getaddresses: getaddrinfo failed: Name or service not known) in /usr/local/cpanel/base/3rdparty/squirrelmail/plugins/login_auth/functions.php on line 129 How do I fix it? I don't know exactly, but from what I've read on some sites, maybe the problem is in DNS? I changed ns1.com to ns1.myhost.com in the Basic cPanel & WHM Setup. P.S. I'm sure it's a server configuration problem, not SquirrelMail; other mail clients are not working either.
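
    A "getaddrinfo failed" for localhost points at name resolution on the server itself rather than at SquirrelMail. A hedged sketch of what to check:

        # localhost should resolve via /etc/hosts; verify the loopback entry:
        grep -i localhost /etc/hosts
        # Expected line (restore it if missing):
        #   127.0.0.1   localhost localhost.localdomain
        # Also confirm an IMAP service is actually listening on port 143:
        netstat -lnp | grep :143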

    Read the article

  • Set up ad hoc wireless connection between Windows Vista and Mac OS X

    - by Skarab
    I have the following problem: Windows Vista does not connect to an ad hoc wireless network created on my MacBook. I have tried to create both a secured (with a 40-bit key) and an unsecured network, but Windows Vista still has problems connecting. After five minutes of attempts, Windows Vista informs me that setting up the connection to my ad hoc network took too much time. My question: do I need to configure some settings in Vista to connect it to my MacBook? Maybe it is a problem with DHCP? Edited: I have tried the other way around: http://superuser.com/questions/202890/set-up-an-adhoc-network-in-windows-vista-to-connect-to-and-share-the-internet-con

    Read the article

  • Alternatives to jmap for creating heap dumps with higher performance?

    - by Christian
    Hi, I have to create heap dumps, which works nicely with jmap. My problem is that jmap takes very long to create the heap dump file, especially when the heap is getting bigger (> 1 GB). One example situation: when the server gets into trouble with heap space, I want to restart it automatically and create a heap dump before the restart. This works, but writing the heap dump takes too long, so the server is down for too long; heap dump creation takes more than an hour. I know about -XX:+HeapDumpOnOutOfMemoryError, but most of the time I can spot the memory problem before the exception is thrown by the JVM. Is there an alternative to jmap which writes heap dumps faster? A special solution for the example above would also be appreciated. This question is a mix between programming and system administration, but I think I'm at the right place here.
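
    One commonly suggested alternative for exactly this restart scenario: snapshot the process with gdb's gcore (which takes seconds), restart the server immediately, and let jmap extract the heap dump from the core file offline. A sketch; the pid lookup is a placeholder:

        pid=$(pgrep -f myserver)            # placeholder: locate the JVM pid
        gcore -o /tmp/jvm-core "$pid"       # fast core snapshot via gdb
        # ...restart the server here, then extract the dump at leisure
        # (JDK 6-era jmap accepts an executable plus a core file):
        jmap -dump:format=b,file=/tmp/heap.hprof $(which java) /tmp/jvm-core.$pid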

    Read the article

  • Debugging IO limitation

    - by Martin F
    I have a Fedora box with some severe IO limitations which I have no idea how to debug. The server has an Areca Technology Corp. ARC-1130 12-Port PCI-X to SATA RAID controller with 12 7200 RPM 1.5 TB disks and a Marvell Technology Group Ltd. 88E8050 PCI-E ASF Gigabit Ethernet controller. uname -a output: 2.6.32.11-99.fc12.x86_64 #1 SMP Mon Apr 5 19:59:38 UTC 2010 x86_64 x86_64 x86_64 GNU/Linux The server is a file server running Nginx with the stub status module enabled, so I can see the current number of connections. The problem presents itself when I have a high number of simultaneous connections in a writing state: usually around 350; at this very moment it's at 590 and the server is almost unusable, stuck at 230mbit/s. If I run top and press 1 to see per-core CPU usage, all 4 cores sit at around 99% IO wait; if I run iotop, the nginx workers are the only processes producing any read load, currently around 25MB/s. I have each of the workers bound to its own core. Initially I figured the disks were just misbehaving, but I've run fsck and smartmontools checks and found no errors. I also ran an iozone test, whose result you can see here: http://www.pastie.org/951667.txt?key=fimcvljulnuqy2dcdxa Additionally, when the number of connections is low, I have no problem getting good speed; if I wget over the local network it easily hits 60MB/sec. Right now I tried putting a file in /dev/shm, symlinked a file from the public dir to it, used wget over the local network, and only got 50KB/s. Also, if I try to cp /dev/shm/test /root/test it quickly copies around 740MB and then slows down HEAVILY, again with iotop reporting 99% iowait. I'm not really sure how to go about figuring out what the problem is. It could be a natural disk limitation, but then the file from /dev/shm ought to transfer quickly, which suggests a network limit; yet the network is fine when there aren't many connections. Perhaps it's a TCP stack problem, but I really have no idea how to check that. Any suggestions on how to proceed with debugging would be very welcome. If additional information is required, let me know and I'll try to get it. Thanks.
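
    For the next occurrence, per-device statistics would show whether the whole array saturates or a single member disk stalls; a small sketch using the sysstat tools:

        # Watch per-device latency (await), queue size and utilisation live:
        iostat -x 5
        # Cross-check which processes are actually generating the waits:
        iotop -o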

    Read the article

  • How can I compact the VHD file with Ubuntu?

    - by AmShegar
    I use Windows Server 2008 R2 with the Hyper-V role. The guest system is Ubuntu 12.04 LTS, which sits on a dynamically expanding virtual hard disk. I want to compact this VHD (the real data size is about 50 GB, but the file takes 360 GB on the host disk), but I cannot, because the guest file system is not NTFS but ext4. What do I need (gparted, sdelete, ...) to solve this problem?
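
    The usual recipe for non-NTFS guests is to zero the free space inside the guest and then compact from the host; a hedged sketch:

        # Inside the Ubuntu guest: overwrite free space with zeros, then delete.
        dd if=/dev/zero of=/zero.fill bs=1M; sync; rm -f /zero.fill
        # Alternative: from a live CD, zero only the free blocks of the
        # unmounted ext4 volume (the device name here is an assumption):
        # zerofree /dev/sda1
        # Then shut the guest down and run Edit Disk -> Compact in Hyper-V Manager.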

    Read the article

  • What equipment do real ISPs use?

    - by Allanrbo
    In a dormitory of 550 residents, people often take down DHCP for the whole network by plugging in their private Wi-Fi routers the wrong way round, turning them into rogue DHCP servers. Also, someone recently misconfigured their PC with a static IP identical to that of the default gateway. We use cheap 3Com switches at the moment. I know that Cisco switches support DHCP snooping to solve the DHCP problem, but that still does not solve the default-gateway IP takeover problem. What sort of switch equipment do real ISPs use so their customers cannot break the network for other customers?
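
    Beyond DHCP snooping, the features that stop a customer from impersonating the gateway are Dynamic ARP Inspection and IP Source Guard. An illustrative Cisco IOS-style sketch (exact commands vary by platform and VLAN layout):

        ip dhcp snooping
        ip dhcp snooping vlan 10
        ip arp inspection vlan 10     ! drop ARP replies that claim the gateway IP
        interface FastEthernet0/1
         ip verify source             ! bind traffic to the port's DHCP lease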

    Read the article

  • Mac has become insanely slow: SystemUIServer, UserEventAgent and loginwindow processes using a lot of memory

    - by SatheeshJM
    I have been using my Mac for many months without any problem, but recently, all of a sudden, it became insanely slow. I opened Activity Monitor to see what was happening. For three processes, SystemUIServer, UserEventAgent and loginwindow, the memory gradually increases, reaching up to 2 GB per process. This completely hangs my Mac. I tried the following: 1. Restart the Mac 2. Restart the Mac in safe mode 3. Manually kill the processes 4. Remove Date and Time from the menu bar (according to many users, this was supposed to fix SystemUIServer's memory problem) 5. Remove the externally connected keyboard and mouse (some had suggested this for UserEventAgent's memory) No luck with any of those; the moment I log in, the memory spikes up. Any idea what the hell is happening? Please help.

    Read the article

  • Determining currently-serving files in IIS 7

    - by Nat Papovich
    serverfault showed me this topic, and I think I want to do the same thing, but in IIS, not Apache. I have a "dashboard" application I'm building and I want it to show which files are currently being served by IIS; they will mostly be large files. I believe the ILogScripting COM interface would have been one good place to start, but it's not available in IIS 7, and it relies on the underlying IIS logs for its data. And therein, I believe, lies my problem: how do I make IIS write, essentially, two log entries, one as the request begins and one when the connection is closed? Also, it looks like IIS doesn't commit log entries in real time as they occur; there's some kind of delay/batch job. That will cause a problem for me too. Or do I need to do something in ISAPI instead?
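
    If the logs turn out to be a dead end, IIS 7's appcmd can enumerate requests that are executing right now, which may be enough for the dashboard; a sketch:

        rem List currently executing requests (site, URL, elapsed time):
        %windir%\system32\inetsrv\appcmd list requests
        rem Or only requests that have been running longer than 10 seconds:
        %windir%\system32\inetsrv\appcmd list requests /elapsed:10000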

    Read the article

  • Wireless is not currently enabled

    - by ikartik90
    I have an HP Pavilion TX2000 tablet PC running Windows 7. I used to access the Internet through my D-Link DIR-615 wireless router, and it worked quite fine until one day when I hibernated Windows 7: the wireless went off, and the problem persists despite my hard efforts to clear it. I checked whether the router works fine, and it does, as my iPod still picks up wireless signals. On the other hand, when I checked my Device Manager, I realized that I now have no wireless driver. I checked HP's website for one, but ironically even they didn't have Windows 7 wireless drivers for my tablet. Please help me find a solution to this problem. Further queries will be entertained as promptly as possible. Thanks.

    Read the article

  • Ethernet switch not working

    - by Froskoy
    I've just tried using two different Ethernet switches on my network to replace an 8-port Netgear gigabit switch, which works fine but doesn't have enough ports for what I need. Computers are connected to a TP-Link TD-8840T router via a switch and use DHCP for IP address assignment. One switch is a TigerSwitch 6924M, which I'd expect to be difficult to set up, since it is second-hand and has an advanced configuration menu that I can't access without a serial port. However, the second switch I tried is a new TP-Link TL-SF024, which doesn't appear to have any configuration options, so that can't be the problem. When I say "not working," I mean that although the computers show they are connected to a network, they cannot access the internet; for example, commands like "ping -c10 google.co.uk" come up with 100% packet loss. What could be causing the problem, and how do I fix it?

    Read the article

  • How do I set up a LAN and VPN on Amazon EC2?

    The problem is as follows: I have two Windows Server 2003 instances running in the cloud. 1) How can I create a local area network from these two instances? 2) Assuming that I want to create a VPN network from these two instances, how do I do that? (I'm not very good at networking, therefore the above problem description might be incomplete or unclear.) A detailed answer or clarification would be praised and appreciated! What I tried: 1) Setting up OpenVPN, but I got lost in the process. 2) Creating a VPN from Windows Server 2003 in the following manner: on instance A, I set up a DHCP server and an "accept incoming VPN" connection, with the TCP/IP setting to obtain an IP from the DHCP server; on instance B, I created a new VPN connection and tried to connect to instance A using instance A's static IP, but error 806 was thrown, something related to the GRE protocol.
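
    Error 806 typically means GRE (which the Windows PPTP VPN needs) is being blocked, so a UDP-based tunnel is the easier route. OpenVPN's static-key point-to-point mode is about the shortest possible setup for two instances; a sketch with placeholder addresses:

        # Generate a shared secret once and copy it to both instances:
        openvpn --genkey --secret static.key
        # On instance A (B's public IP is a placeholder):
        openvpn --remote <ip-of-B> --dev tun --ifconfig 10.8.0.1 10.8.0.2 --secret static.key
        # On instance B:
        openvpn --remote <ip-of-A> --dev tun --ifconfig 10.8.0.2 10.8.0.1 --secret static.key
        # (Remember to allow OpenVPN's UDP port, 1194 by default, in the EC2
        # security group on both instances.)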

    Read the article

  • Mac OS X Lion 10.7.2 update breaks SSL

    - by mcandre
    Summary After updating from 10.7.1 to 10.7.2, neither Safari nor Google Chrome can load GMail. Spinning Beachballs all around. The problem isn't GMail; Firefox loads GMail just fine. The problem isn't limited to Safari or Google Chrome; other applications also have trouble with SSL: Gilgamesh and Safari. Any program that uses WebKit (Google Chrome, Safari) or a Cocoa library (Gilgamesh) to access the Internet has trouble loading secure sites. The various forums online suggest a handful of fixes, none of which work. Analysis Fix #1: Open Keychain Access.app and delete the Unknown certificate. The 10.7.2 update also prevents Keychain Access from loading; the Keychain program itself shows the Spinning Beachball. Fix #2: Delete ~/Library/Keychains/login.keychain and /Library/Keychains/System.keychain. This temporarily resolves the issue and lets you load secure sites, but a minute or two after rebooting or hibernating somehow magically undoes the fix, so you have to delete these files over and over. Fix #3: Delete ~/Library/Application\ Support/Mob* and /Library/Application\ Support/Mob*. There is a rumor that the new MobileMe/iCloud service ubd is causing the issue. This fix does not resolve the issue. Fix #4: Open Keychain Access, open the Preferences, and disable OCSP and CRL. This fix does not resolve the issue. Fix #5: Use the 10.7.0 - 10.7.2 combo installer, rather than the 10.7.1 - 10.7.2 installer. When I run the combo installer, it stays forever at the "Validating Packages..." screen. The combo installer itself is bugged to He||. I force-quit the installer, ran "sudo killall installd" to force-quit the background installer process, and reran the combo installer. Same problem: it stalls at "Validating Packages..." Recap The only fix that works is deleting the keychains, but you have to do this every time you reboot or wake from hibernate. There is some evidence that ubd continually corrupts the keychain files, but the suggested ubd fix of deleting ~/Library/Application\ Support/Mob* and /Library/Application\ Support/Mob* does not resolve this issue. Evidently, something is corrupting the keychain over and over and over. Also posted on the Apple Support Communities.

    Read the article
