Search Results

Search found 26978 results on 1080 pages for 'load testing'.

Page 876 of 1080

  • Change the Powershell $profile directory

    - by Swoogan
    I would like to know how to change the location my $profile variable points to. PS H:\> $profile H:\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 H:\ is a network share, so when I create my profile file and load PowerShell I get the following: Security Warning Run only scripts that you trust. While scripts from the Internet can be useful, this script can potentially harm your computer. Do you want to run H:\WindowsPowerShell\Microsoft.PowerShell_profile.ps1? [D] Do not run [R] Run once [S] Suspend [?] Help (default is "D"): According to Microsoft, the location of $profile is determined by the %USERPROFILE% environment variable. This is not true: PS H:\> $env:userprofile C:\Users\username For example, I have an XP machine working how I want: PS H:\> $profile C:\Documents and Settings\username\My Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 PS H:\> $env:userprofile C:\Documents and Settings\username PS H:\> $env:homedrive H: PS H:\> $env:homepath \ Here's the same output from the Vista machine where $profile points to the wrong place: PS H:\> $profile H:\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 PS H:\> $env:userprofile C:\Users\username PS H:\> $env:homedrive H: PS H:\> $env:homepath \ Since $profile isn't actually determined by %USERPROFILE%, how do I change it? Clearly anything that involves changing the homedrive or homepath is not the solution I'm looking for.
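
    A quick way to confirm where Windows PowerShell is actually deriving the profile path from (a diagnostic sketch only — it assumes $profile is built from the "Personal"/My Documents shell folder, which on this Vista box looks redirected to H:\):

        # Where does Windows think "My Documents" lives? $profile is built under this folder.
        [Environment]::GetFolderPath('MyDocuments')

        # The same value as stored in the registry (folder redirection policies change this key)
        Get-ItemProperty 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders' |
            Select-Object Personal

    If that prints H:\, the redirection of My Documents — not %USERPROFILE% — is what is steering $profile.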

    Read the article

  • JVM disappeared on Mac OS X Snow Leopard 10.6.8

    - by weisjohn
    I'm working in Eclipse one night, (also using Android's DDMS from the commandline). The next morning, I open the lid... attempt to run Eclipse and get an error. me$ sudo /Applications/eclipse/eclipse JavaVM: requested Java version ((null)) not available. Using Java at "" instead. JavaVM: Failed to load JVM: /bundle/Libraries/libserver.dylib So I then attempt to find out where my JDKs are pointed: me$ ls -la /System/Library/Frameworks/JavaVM.framework/Versions/ total 64 drwxr-xr-x 12 root wheel 408 Nov 16 10:44 . drwxr-xr-x 12 root wheel 408 Sep 7 09:39 .. lrwxr-xr-x 1 root wheel 5 Sep 7 17:07 1.3 -> 1.3.1 drwxr-xr-x 3 root wheel 102 Dec 2 2009 1.3.1 lrwxr-xr-x 1 root wheel 10 Sep 7 17:07 1.4 -> CurrentJDK lrwxr-xr-x 1 root wheel 10 Sep 7 17:07 1.4.2 -> CurrentJDK lrwxr-xr-x 1 root wheel 10 Sep 7 17:07 1.5 -> CurrentJDK lrwxr-xr-x 1 root wheel 10 Sep 7 17:07 1.5.0 -> CurrentJDK lrwxr-xr-x 1 root wheel 10 Sep 7 17:07 1.6 -> CurrentJDK drwxr-xr-x 9 root wheel 306 Nov 16 10:44 A lrwxr-xr-x 1 root wheel 1 Sep 7 17:07 Current -> A lrwxr-xr-x 1 root wheel 59 Sep 7 17:07 CurrentJDK -> /System/Library/Java/JavaVirtualMachines/1.6.0.jdk/Contents Everything looks normal so far... me$ ls -la /System/Library/Java/JavaVirtualMachines/ total 0 drwxr-xr-x 2 root wheel 68 Nov 16 10:44 . drwxr-xr-x 5 root wheel 170 Nov 16 10:44 .. Apparently, my virtual machines have been deleted or moved? I'll probably be able to just re-install Java, but does anyone have any insight into why this may have happened or how to prevent in the future?

    Read the article

  • VLAN for WiFi traffic separation (new to VLANing)

    - by Philip
    I run a school network with switches in different departments. All is routed through to a central switch to access the servers. I would like to install WiFi access points in the different departments and have this routed through the firewall (an Untangle box that can captive-portal the traffic, to provide authentication) before it gets onto the LAN or to the Internet. I know that the ports that the APs connect to on the relevant switches need to be set to a different VLAN. My question is how do I configure these ports. Which are tagged? Which are untagged? I obviously don't want to interrupt normal network traffic. Am I correct in saying: The majority of the ports should be UNTAGGED VLAN 1? Those that have WiFi APs attached should be UNTAGGED VLAN 2 (only) The uplinks to the central switch should be TAGGED VLAN 1 and TAGGED VLAN 2 The central switch's incoming ports from the outlying switches should also be TAGGED VLAN 1 and TAGGED VLAN 2 There will be two links to the firewall (each on its own NIC), one UNTAGGED VLAN 1 (for normal internet access traffic) and one UNTAGGED VLAN 2 (for captive portal authentication). This does mean that all wireless traffic will be routed over a single NIC which will also up the workload for the firewall. At this stage, I'm not concerned about that load.
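
    For reference, a rough sketch of what that might look like on a managed switch — ProCurve-style CLI shown purely as an example; the port numbers and exact syntax are assumptions and will differ per vendor:

        vlan 1
           untagged 1-20     ; normal LAN ports stay untagged in VLAN 1
           tagged 24         ; uplink to the central switch carries VLAN 1 tagged
           exit
        vlan 2
           name "wifi"
           untagged 21-22    ; ports with WiFi APs attached
           tagged 24         ; the same uplink also carries VLAN 2 tagged
           exit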

    Read the article

  • FreeBSD's ng_nat periodically stops passing packets

    - by Korjavin Ivan
    I have a FreeBSD router: #uname 9.1-STABLE FreeBSD 9.1-STABLE #0: Fri Jan 18 16:20:47 YEKT 2013 It's a powerful computer with a lot of memory #top -S last pid: 45076; load averages: 1.54, 1.46, 1.29 up 0+21:13:28 19:23:46 84 processes: 2 running, 81 sleeping, 1 waiting CPU: 3.1% user, 0.0% nice, 32.1% system, 5.3% interrupt, 59.5% idle Mem: 390M Active, 1441M Inact, 785M Wired, 799M Buf, 5008M Free Swap: 8192M Total, 8192M Free PID USERNAME THR PRI NICE SIZE RES STATE C TIME WCPU COMMAND 11 root 4 155 ki31 0K 64K RUN 3 71.4H 254.83% idle 13 root 4 -16 - 0K 64K sleep 0 101:52 103.03% ng_queue 0 root 14 -92 0 0K 224K - 2 229:44 16.55% kernel 12 root 17 -84 - 0K 272K WAIT 0 213:32 15.67% intr 40228 root 1 22 0 51060K 25084K select 0 20:27 1.66% snmpd 15052 root 1 52 0 104M 22204K select 2 4:36 0.98% mpd5 19 root 1 16 - 0K 16K syncer 1 0:48 0.20% syncer Its tasks are: NAT via ng_nat and PPPoE server via mpd5. Traffic through it is about 300Mbit/s, about 40kpps at peak. PPPoE sessions created - 350 max. ng_nat is configured by the script: /usr/sbin/ngctl -f- <<-EOF mkpeer ipfw: nat %s out name ipfw:%s %s connect ipfw: %s: %s in msg %s: setaliasaddr 1.1.%s There are 20 such ng_nat nodes, with about 150 clients. Sometimes the traffic via NAT stops. When this happens, vmstat reports a lot of FAIL counts: vmstat -z | grep -i netgraph ITEM SIZE LIMIT USED FREE REQ FAIL SLEEP NetGraph items: 72, 10266, 1, 376,39178965, 0, 0 NetGraph data items: 72, 10266, 9, 10257,2327948820,2131611,4033 I tried increasing net.graph.maxdata=10240 and net.graph.maxalloc=10240, but this doesn't work. It's a new problem (1-2 weeks old). The configuration had been working well for about 5 months and no configuration changes were made leading up to the problems starting. In the last few weeks we have slightly increased traffic (from 270 to 300 Mbit/s) and added a few more PPPoE sessions (300-350). Please help me find and solve this problem.
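
    One thing worth checking, since the vmstat output shows "NetGraph data items" sitting at its 10266 LIMIT with a large FAIL count: net.graph.maxdata and net.graph.maxalloc are loader tunables on FreeBSD, so setting them with sysctl on a running system has no effect — they need to go into /boot/loader.conf and only take effect after a reboot. A sketch (the values are only examples):

        # /boot/loader.conf -- the netgraph item pools are sized at boot
        net.graph.maxdata=65536
        net.graph.maxalloc=65536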

    Read the article

  • Server 2012 intermittently fails to respond to pings from single host, even with firewall disabled, but responds to non-ICMP requests fine

    - by James Westbury
    This one is kind of weird. I've got the following machines involved: DC01 - 10.1.2.42, Server 2012, domain controller & DNS server, physical machine nagiosv - 10.1.2.35, CentOS 6.4, Nagios, virtual machine CB01 - 10.1.3.81, Ubuntu 12.04 LTS, couchbase server, virtual machine So, I noticed something was wrong while configuring this new Nagios VM. I started seeing DC01's state flapping. I logged into nagiosv when I saw this happening, and attempted to ping DC01, both by FQDN and its IP address. Neither worked. I tried pinging the machine from CB01, which is another VM on the same virtual switch/physical NIC as nagiosv, and that worked fine. Pings still failing from nagiosv at this time. DC01 is also an internal DNS server, so I ran dig google.com from nagiosv, and was able to run a query against DC01 just fine: ;; Query time: 1 msec ;; SERVER: 10.1.2.42#53(10.1.2.42) ;; WHEN: Fri Nov 1 07:53:51 2013 ;; MSG SIZE rcvd: 204 Pings still failing from nagiosv, though. I can ping from DC01 to nagiosv, and that works, and I can still ping from other VMs on the same physical NIC into DC01, and that works. I should mention at this point that I've disabled the firewall on DC01 for testing purposes, and it doesn't make a damned bit of difference. (Even with the firewall enabled, I have a blanket exception for ICMP from the local subnet, so it shouldn't make a difference, but I figured I should test it anyway.) I loaded up Wireshark on DC01 and pinged it from nagiosv again. What I see is a bunch of echo requests coming in and not a single reply going back out. Filtered results here, showing all ICMP traffic during a 15-second period. A few more bits of info: There are no IP conflicts on the network. MAC addresses on the incoming pings match the MAC on the VM. There are no duplicate MACs on the network, as far as I can see. I have absolutely no idea why DC01 is failing to respond, here. Any ideas?
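
    Even with the firewall profile disabled, the Windows Filtering Platform can still drop packets (IPsec policies or third-party filter drivers, for instance), so one diagnostic worth trying — a sketch, assuming nothing beyond stock Server 2012 tooling — is to audit WFP packet drops on DC01 while the pings from nagiosv are failing:

        rem Turn on auditing of packet drops, reproduce the failing pings, then check the Security event log
        auditpol /set /subcategory:"Filtering Platform Packet Drop" /success:enable /failure:enable

        rem Dump the active WFP state: lists every filter that could be eating the ICMP echo requests
        netsh wfp show state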

    Read the article

  • Monitoring AWS Systems Behind ElasticBeanStalk

    - by A. Avadis
    So I'm getting a company set up in the Amazon Cloud -- creating IAAS protocol/solutions/standardized implementation, etc while also being the SysAdmin for individual systems, app environments, and day-to-day uptime. One of the biggest issues I'm having is tracking various system/application logs, as well as logging/monitoring/archiving system metrics like memory usage, cpu usage, etc etc In a centralized fashion. E.g. -- Nagios + Urchin. The BIGGEST impediment to my endeavors is the following: The company application is deployed in the form of a Java *.WAR file, uploaded to an Elastic BeanStalk application environment, load balancing and auto-scaling between 3(min) and 10(max) servers, and the EC2's that run the application are fired up and disposed of ad-hoc. That is to say, I can't monitor the individual EC2's for very long because so many are being terminated then auto-provisioned/auto-scaled on the fly -- so I'd constantly be having to "monitor what I'm monitoring", and continuously remove/add EC2 machine addresses to my monitoring lists. IS there some sort of way to use monitoring tools like Zabbix or Nagios to monitor the ElasticBeanStalk, and have it automatically add on new EC2's, and remove terminated/failed EC2's from its monitoring list automatically? Furthermore, is there anything I can do with GrayLog to achieve similar results with the aggregation/centralization of my application logs from multiple EC2 instances into ONE consolidated set of logs/events? If not GrayLog, is there ANYTHING LIKE GrayLog that can automatically detect what EC2 members are being added/removed from the environment, and collect the logs from them automatically? Any and all advice or direction is appreciated. Thanks much, and cheers!!
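
    On the auto-discovery side, one workable pattern (a sketch only — the tag key and environment name are assumptions about a standard Elastic Beanstalk setup) is to regenerate the monitoring host list from the EC2 API on a schedule, since Beanstalk tags every instance it launches with its environment name:

        #!/bin/sh
        # Print the private IPs of the running instances in one Beanstalk environment;
        # feed the output to whatever template builds the Nagios/Zabbix host definitions.
        aws ec2 describe-instances \
            --filters "Name=tag:elasticbeanstalk:environment-name,Values=my-env" \
                      "Name=instance-state-name,Values=running" \
            --query 'Reservations[].Instances[].PrivateIpAddress' \
            --output text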

    Read the article

  • White Screen, No Errors.

    - by GruffTech
    So.. Interesting problem for you guys, As I'm completely lost as to what to do, or where to take the next step. Server & Application Environment. CentOS release 5.3 (Final) Apache 2.2.3-22 EnableSendfile off EnableMMAP off ErrorLog logs/error_log LogLevel debug PHP-5.2.6-2 error_reporting = E_ALL display_errors = on log_errors = on max_execution_time=300 max_input_time=60 memory_limit=512mb Kohana 2.3 PHP Environment. HAProxy 1.3.15.6-2 MemCacheD 1.2.6-1 Our application is split between 3 web servers, mounting a NFS Storage server, and sticky load balancing between the 3 web servers. The application seemingly runs great, but every so often, instead of loading, the application just shows a pure white page. Not a 404 Error, or a 500 Server Error, a clean white page. And it returns instantly, so its not a execution time error. Nothing in the Error log, or Server-Error Log, Proxy log shows standard proxied connection, Just the standard 200-Status in Access log, with 256 bytes transferred. To me, this leads to tell me that the application itself is having a problem. A rare, unexplainable, seemingly random, problem that causes what we've now called the "White Screen of Death." Our developers all say that since there is nothing going to our error logs, that it must be a server problem. But I say the same thing, There's nothing going to ANY of our logs (relevent to this anyway), and we're not having httpd children crash from what i can tell. Any ideas on how i can increase my logs, or somehow prove that its not a bug in PHP, Apache, CentOS, ect? Or if it is somehow a bug, identify it?
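
    One way to prove whether PHP is dying silently (a sketch — the log path is an arbitrary assumption) is to register a shutdown handler as early as possible in the front controller; fatal errors that never reach display_errors still show up in error_get_last():

        <?php
        // Early in index.php, before Kohana bootstraps. Kept PHP 5.2-compatible (no closures).
        function wsod_shutdown_logger() {
            $e = error_get_last();
            if ($e !== null) {
                // Assumed path -- anywhere the apache user can write will do
                error_log(date('c') . ' ' . print_r($e, true) . "\n", 3, '/tmp/wsod.log');
            }
        }
        register_shutdown_function('wsod_shutdown_logger');

    If the white screens never land in that log either, the blank response is probably being produced upstream of PHP, which narrows the hunt considerably.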

    Read the article

  • Squid reverse proxy array - siblings not communicating with each other

    - by V. Romanov
    I want to set up 2 squid servers to act as reverse proxy and cache for a webserver on our intranet. The load balancing will be done with DNS round robin or just different mappings for different clients. The thing is, I want both servers to try and contact each other to see if they have the object required in cache before contacting the webserver for it (the network that servers the webserver is the bottleneck and I'm trying to eliminate it) Both squids are configured the same, here are the relevant config lines : acl dvr1_cache_it_best_tv_com dstdomain dvr1.cache.it.best-tv.com acl squid1_it_best_tv_com dstdomain squid1.it.best-tv.com acl squid2_it_best_tv_com dstdomain squid2.it.best-tv.com http_access allow dvr1_cache_it_best_tv_com http_access allow squid1_it_best_tv_com http_access allow squid2_it_best_tv_com http_access allow all http_port 8081 accel defaultsite=dvr1.cache.it.best-tv.com cache_peer dvr1.origin.it.best-tv.com parent 80 0 no-query originserver name=Proxy_dvr1_origin_it_best_tv_com cache_peer squid1.it.best-tv.com sibling 8081 3130 weight=10 name=Proxy_Squid1_it_best_tv_com cache_peer squid2.it.best-tv.com sibling 8081 3130 weight=10 name=Proxy_Squid2_it_best_tv_com cache_peer_access Proxy_dvr1_origin_it_best_tv_com allow dvr1_cache_it_best_tv_com cache_peer_access Proxy_squid1_it_best_tv_com allow squid1_it_best_tv_com cache_peer_access Proxy_squid1_it_best_tv_com allow squid2_it_best_tv_com cache_peer_access Proxy_squid1_it_best_tv_com allow dvr1_cache_it_best_tv_com cache_peer_access Proxy_squid2_it_best_tv_com allow squid1_it_best_tv_com cache_peer_access Proxy_squid2_it_best_tv_com allow squid2_it_best_tv_com cache_peer_access Proxy_squid2_it_best_tv_com allow dvr1_cache_it_best_tv_com just to make it clear - dvr1.cache is the alias for the proxy servers. dvr1.origin is the web server. Both servers work, both serve content and cache it and work fine. However, when I clear the cache on one server and then access it, it gets the content from the parent (DVR1_ORIGIN) instead of going to the sibling squid. What did I configure wrong? Or perhaps I don't understand the architecture correctly? I read the squid manuals but as far as I see i did it all by the book and yet it doesn't work right. Any help will be appreciated!
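
    One thing the posted lines don't show, so this is only a guess written as a sketch: sibling lookups are done over ICP, and each squid has to be listening for and allowing ICP queries from its peer — without an icp_port and a matching icp_access rule the sibling is silently skipped and the request falls through to the parent:

        # on both squid1 and squid2 (squid.conf) -- ACL name and addresses are placeholders
        icp_port 3130
        acl proxy_siblings src <ip_of_squid1> <ip_of_squid2>
        icp_access allow proxy_siblings
        icp_access deny all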

    Read the article

  • limiting connections from tomcat to IIS - proxy? iptables?

    - by Chris Phillips
    Howdy, I have a webapp on Tomcat 6 which is connecting to an M$ PlayReady DRM instance on IIS 6.0. The performance is seen to be best when we benchmark (using ab) the DRM service with 25 concurrent connections, which gives about 250 requests per second, which is ace. Higher concurrent connection counts result in TCP/IP timeouts and other lower-level mess. But there is no way to control how the Tomcat app connects to the service - it's not internally managing a pool of connections etc, they are all isolated HTTP connections to the server. Ideally I'd like a situation where we can have 25 HTTP 1.1 connections being kept alive permanently from Tomcat and requesting the licenses through this static pool of connections, which I think would give the best performance. But this is not in the code, so I was looking for a way to possibly simulate this at the Linux level. I was possibly thinking that iptables connlimit might be able to gracefully handle these connections, but whilst it could limit, it'd probably still annoy the app. What about a proxy? nginx (or possibly squid) seems potentially appealing to run on the tomcat server and hit on localhost, as we might want to add additional DRM servers to load balance across anyway. Could this take 100 incoming connections from Tomcat, accept them all and proxy them over to the IIS server in a more respectful manner? Any other angles? EDIT - mod_proxy for Apache, which we are already using for conventional purposes on an Apache instance in front of this Tomcat instance, might be ideal. I can set a max value on the proxy_pass to only allow 25 connections, and keep them alive permanently. Is that my answer? Many thanks, Chris
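
    For what it's worth, the mod_proxy idea from the edit would look roughly like this — a sketch, not tested against PlayReady; the URL and numbers are placeholders, and note that with the prefork MPM the max= limit applies per Apache child process rather than as one global cap:

        # In the existing Apache front end (httpd.conf / vhost)
        ProxyPass        /playready/ http://iis-drm-host/playready/ max=25 smax=25 ttl=120 keepalive=On
        ProxyPassReverse /playready/ http://iis-drm-host/playready/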

    Read the article

  • VIM "upgraded" to expandtab and tabstop=8 on Python files

    - by dotancohen
    After reinstalling my OS from Kubuntu 12.10 to Kubuntu 14.04, VIM has changed its behaviour when editing Python files. Though before the reinstall all file types had noexpandtab and tabstop=4 set, now in Python those values are expandtab and tabstop=8, checked also via VIM behaviour and also via asking VIM set foo?. Non-Python files retain the noexpandtab and tabstop=4 behaviour that I prefer. The .vim direcotry and .vimrc were not touched during the reinstall. It can be seen that no files in .vimrc have been touched in months (with the exception of the irrelevant .netrwhist): - bruno():~$ ls -lat ~/.vim total 68 drwxr-xr-x 85 dotancohen dotancohen 12288 Aug 25 13:00 .. drwxr-xr-x 12 dotancohen dotancohen 4096 Aug 21 11:11 . -rw-r--r-- 1 dotancohen dotancohen 268 Aug 21 11:11 .netrwhist drwxr-xr-x 2 dotancohen dotancohen 4096 Mar 6 18:31 plugin drwxr-xr-x 2 dotancohen dotancohen 4096 Mar 6 18:31 doc drwxrwxr-x 2 dotancohen dotancohen 4096 Nov 29 2013 syntax drwxrwxr-x 2 dotancohen dotancohen 4096 Nov 29 2013 ftplugin drwxr-xr-x 4 dotancohen dotancohen 4096 Nov 29 2013 autoload drwxrwxr-x 5 dotancohen dotancohen 4096 May 27 2013 after drwxr-xr-x 2 dotancohen dotancohen 4096 Nov 1 2012 spell -rw------- 1 dotancohen dotancohen 138 Aug 14 2012 .directory -rw-rw-r-- 1 dotancohen dotancohen 190 Jul 3 2012 .VimballRecord drwxrwxr-x 2 dotancohen dotancohen 4096 May 12 2012 colors drwxrwxr-x 2 dotancohen dotancohen 4096 Mar 16 2012 mytags drwxrwxr-x 2 dotancohen dotancohen 4096 Feb 14 2012 keymap Though .vimrc has been touched since the reinstall, it was only me testing to see where the problem is. How can I tell what is settingexpandtab and tabstop? Side note: I'm not even sure what I should read in the built-in help for this issue. I started with ":h plugin" but that did not help other than showing me that the following plugins are loaded (possibly relevant): standard-plugin-list Standard plugins pi_getscript.txt Downloading latest version of Vim scripts pi_gzip.txt Reading and writing compressed files pi_netrw.txt Reading and writing files over a network pi_paren.txt Highlight matching parens pi_tar.txt Tar file explorer pi_vimball.txt Create a self-installing Vim script pi_zip.txt Zip archive explorer LOCAL ADDITIONS: local-additions DynamicSigns.txt - Using Signs for different things NrrwRgn.txt A Narrow Region Plugin (similar to Emacs) fugitive.txt A Git wrapper so awesome, it should be illegal indent-object.txt Text objects based on indent levels. taglist.txt Plugin for browsing source code vimwiki.txt A Personal Wiki for Vim
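
    Vim can report which script last touched those options, which is usually quicker than bisecting plugins — a short sketch (the override path assumes the stock "after" directory layout):

        " Inside Vim, while editing a .py file: each option is shown with 'Last set from <file>'
        :verbose set expandtab? tabstop? softtabstop? shiftwidth?

        " If the culprit is a stock ftplugin, override it in ~/.vim/after/ftplugin/python.vim
        setlocal noexpandtab tabstop=4 softtabstop=4 shiftwidth=4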

    Read the article

  • SSD/HDD not exceeding 120 MB/s

    - by skiwi
    So here is the situation: first, this was my old PC; it had a 2x 1TB RAID 0 and a Corsair Force 3 SSD in it. These were the old speeds, measured by HDTune Pro. 2x 1TB RAID 0: Corsair Force 3 SSD Then my dad got my PC and we had several issues; in the end it turned out both the RAID and SSD controllers were malfunctioning, causing BlueScreens at 100% load. We removed the RAID 0, leaving the HDDs intact, and bought a Samsung 840 EVO 120GB, though the Corsair SSD is still in the system, just not as the system disk anymore. 1TB HDD (one of them): Corsair SSD: Samsung SSD: We did not assemble the PC ourselves, so answering some technical questions might be more difficult, though we will do our best. The first thing we noticed is that the Samsung 840 EVO is nowhere near reaching its advertised speed; even a Samsung 840 250GB (non-EVO) reaches 350 MB/s in my own PC. Then we noticed that both SSDs are capped at 120 MB/s exactly; not sure if this is being caused by HDTune Pro, but that seems very unlikely. And even worse, the Corsair Force 3 was running faster before the system got reassembled. Does anyone have any clue what is going on?

    Read the article

  • How to make an EC2 AMI work on Xen on Debian

    - by user67103
    Hi. Here is the scenario. We are creating a cloud app. We have a Debian machine on which we create a customized image and upload it to Amazon EC2. After uploading it to the cloud we have made some more customizations and are trying to rebundle it, and we are facing some issues in rebundling. We would like to know if we could do something like this: a. Create an AMI image on Debian b. Load it onto a Xen hypervisor running on that Debian machine c. Customize the image d. Save the customized image e. Upload it to EC2. The issue is that I am unable to find a proper guide on how to install Xen on Debian, and I don't know whether an AMI running on Xen will work on EC2. Any suggestions/help would be greatly appreciated. Thanks, Krishna

    Read the article

  • What determines the BIOS you can use?

    - by andy
    I have a Compaq SR5710F with an MCP61PM-HM (Iris8) motherboard and loaded a GeForce 6100SM-M BIOS onto it. It works fine and opened up some options, but I want to give my computer AM3 socket support. So my question is: if I go looking for a BIOS that will do that for me, what do I need to pay attention to? Is it only the chipset (GeForce 6150SE nForce 405), or are there more things I should be looking at? If anyone thinks they might know a BIOS that will help me out, that would be good too. I am looking for a retail BIOS though; I do not want to load any of the experimental ones you can find on the net. Also, I would need it to support DDR2 800 RAM, and I would like to keep AM2 support too. I know my BIOS is data that is downloaded onto my motherboard, or for lack of a better word a program. I am currently running a BIOS that is not for my motherboard, so I know it can be done. What I need to know is what to pay attention to when looking for a compatible BIOS that is not made for my motherboard.

    Read the article

  • IIS web server creates new file: ownership?

    - by Beaud.
    IIS is configured for Integrated Windows Authentication. web.config is configured as follows: <authentication mode="Windows" /> <identity impersonate="true" /> We are load balancing between \\webserver1 and \\webserver2. Windows Server 2003 \\webserverX creates an XML file on \\share1 and access is denied. We got past the access denial by allowing Everyone to access the share... We would like the impersonated user to be the owner of the created file. Instead, \\webserver1's computer account is the owner. How can we make sure that the impersonated user has ownership of the file at creation time? PROGRESSION: I decided to create the file locally in \\webserver1's root directory. The file's owner is NETWORK SERVICE even with impersonate="true". I'm unable to change ownership of the file in C# code. Why won't IIS use the impersonated user's permissions when creating a file? If it actually does, what am I doing wrong?

    Read the article

  • How do you get AWS VPC EC2 instances to be able to see the AWS APIs?

    - by Peter Mounce
    We're spinning up infrastructure inside of an AWS VPC via CloudFormation. We're using auto-scaling groups to bring up VPC-EC2 instances (so, we don't bring up instances directly; ASGs manage that). Inside of a VPC, EC2 instances only have a private IP; they cannot see the outside world without further work. When these instances spin up, we have some bootstrap tasks that require talking to the various AWS APIs. We also have some ongoing tasks that require AWS API traffic. How are you tackling this apparent chicken-and-egg problem? We've read about: NAT instances - but we don't like this so much because it's another layer in our stack; assigning Elastic IPs to each VPC instance that needs to talk - but a) they all do, and b) since we're using ASGs, we don't know which instances to assign EIPs to at provision time, and c) we'd need to set up something to monitor those ASGs and assign EIPs when instances are terminated and replaced; spinning up an instance (actually, a load-balanced pair, probably spanning AZs) to act as an AWS-API proxy for all API traffic. I guess I'm wondering whether there's some kind of back-door we can open that allows our VPC EC2 instances access to the AWS API endpoints, but nothing else, with cheap setup complexity, and that doesn't add another network-hop layer to our infrastructure for serving requests.

    Read the article

  • How to install a desktop environment onto Ubuntu Server -- but without internet access or a CDROM?

    - by James
    I am playing around with a computer which has no CDROM drive or internet access and I have installed Ubuntu Server onto it. I have that all up and running nicely but now I'd like to install Xfce, GNOME or something similar so I can load up a desktop environment from the command line if I wish. Obviously with internet access or a CDROM, this would be a simple task of using apt-get and it finding & retrieving the packages for me, I assume, but I do not have either. I do however have a USB drive and I have used Unetbootin to make it into a bootable drive with the Ubuntu Server disk image files on there. I have mounted the USB drive to /media/usb0 and tried the command "sudo apt-cdrom add -d /media/usb0" to get apt to recognise the USb drive as an "Ubuntu CD" -- a source of package files but apt-get doesn't seem to be finding Xfce.. I try "sudo apt-get install xfce" and "sudo apt-get install xfce4" but neither find the package.. I would prefer to have Xfce but GNOME would be OK too.. My question is, am I doing something wrong? I figured that the Ubuntu Server disk (or rather, my Ubuntu Server USB drive) might not have any desktop environment packages on there so I tried the Xubuntu Desktop disk too (again, from my USB drive). I tried "sudo apt-get install xubuntu-desktop" but it couldn't find the package - even though it is listed under the /casper/ directory in some MANIFEST file. Anyone see where I'm going wrong? Maybe apt-get install is looking somewhere other than my USB drive? Maybe my commands are wrong? Maybe the disks don't even have the desktop environments on!? Thanks in advance guys, any input would be much appreciated. Cheers - James
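
    The Server ISO genuinely doesn't carry the desktop packages, so apt-cdrom alone can't get there; one offline route is apt-offline — a sketch, assuming you can first get the small apt-offline .deb onto the server via the USB stick and that some other machine has internet access:

        # On the offline server: record what apt would need in order to install the desktop
        sudo apt-offline set desktop.sig --install-packages xubuntu-desktop

        # On an internet-connected machine: download everything listed in the signature file
        apt-offline get desktop.sig --bundle desktop-bundle.zip

        # Back on the offline server: load the bundle into apt's cache and install as usual
        sudo apt-offline install desktop-bundle.zip
        sudo apt-get install xubuntu-desktop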

    Read the article

  • What to do when a device has no driver for Windows 7 but it has Vista, XP drivers

    - by Mehper C. Palavuzlar
    This has always been a bothersome matter for me. Some devices (printers, scanners, etc.) have drivers for older versions of Windows (Vista, XP, 2000, NT) but no driver for Windows 7. What are my chances to install such devices on Windows 7? Example case: I have a Sharp printer & scanner (Sharp AR-122E N) which I have used for my old Windows XP based PC. Now I want to install it on a Windows 7 x64 based PC. Windows 7 cannot load its driver. I used the original driver CD but when I run the setup.exe (which is included in AR122EN111.exe, 6713KB), it says Cannot install driver on this operating system. Supported operating systems are: Windows 2000, XP, Vista. I tried to install the driver using compatibility settings. I tried Windows Vista and Windows XP SP3, but to no avail. The setup gave the same error. I also googled for Windows 7 driver for "Sharp AR-122E N" but it only listed the original driver that I tried. The official site of Sharp does not even list the driver for this product. In the past, the compatibility setting workaround did work for some devices, but this time it failed. What else can I do to overcome this problem?

    Read the article

  • A lot of Apache processes are running and CPU usage is always above 70%

    - by Barkat Ullah
    I am running a Plesk panel on a 1and1 server. I have 120 sites running and all are using Pligg CMS; each site has 600 visitors per day. Please see the details of my server below: HDD-1000GB RAM-16GB Processor-6 Core I always see a lot of Apache processes running in my # top view, so the server seems overloaded. If I can reduce the number of Apache processes I think the server will be OK, but I don't know why so many Apache processes are running. Please see the link below for a screenshot of my # top view: http://dl.dropbox.com/u/26967109/%23Top-2.jpg Sometimes I see "too many connections" errors in my Plesk control panel, so I added the line below to my [mysqld] section: set-variable=max_connections=416 But that didn't solve it. I have also set MaxClients and ServerLimit to 416 in /etc/httpd/conf/httpd.conf, but still no solution. I have been researching for more than 7 days without finding a solution. Please help me solve the problem. During peak hours my sites take too long to load, but during off-peak hours they are OK. Please help me find the actual problem.
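
    Before tuning further it helps to see what all those Apache children are actually doing; a minimal mod_status sketch for Apache 2.2 (it assumes mod_status is loaded, which it usually is on a stock CentOS/Plesk build):

        # e.g. /etc/httpd/conf.d/status.conf, then: service httpd reload
        ExtendedStatus On
        <Location /server-status>
            SetHandler server-status
            Order deny,allow
            Deny from all
            Allow from 127.0.0.1
        </Location>

    The status page shows whether the children are busy running PHP, waiting on MySQL, or tied up by slow clients, which is usually the real question behind "too many Apache processes".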

    Read the article

  • IIS 6 ASP.NET default handler-mappings and virtual directories

    - by Mark Lauter
    I'm having a problem with setting a default mapping in IIS 6. I want to secure *.HTML files with ASP.NET forms authentication. The problem seems to have something to do with using virtual directories to hold the html files. Here's how it's setup: sample directory tree c:/inetpub/ (nothing in here) d:/web_files/my_web_apps d:/web_files/my_web_apps/app1/ d:/web_files/my_web_apps/app2/ d:/web_files/my_web_apps/html_files/ app1 and app2 both access the same html_files directory, so html_files is set as a virtual directory in the web apps in IIS... sample web directory tree //app1/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/) //app2/html_files/ (points to physical directory: d:/web_files/my_web_apps/html_files/) If I put a file called test.html in the root of //app1/ and then add the default mapping to the asp.net dll and setup my security on the root folder with deny="?", then accessing test.html works exactly as expected. If I'm not authenticated, it takes me to the login.aspx page, and if I am authenticated then it displays test.html. If I put the test.html file in the html_files directory I get a totally different behavior. Now the login.aspx page loads and I stuck some code in to check if I was still authenticated: <p>autheticated: <%=User.Identity.IsAuthenticated%></p> I figured it would say false because why else would it bother to load the login page? Nope, it says true - so it knows i'm authenticated, but it won't give me access to the test.html file. I've spent several hours on this and haven't been able to solve it. I'm going to spend some more time on google to see if I've missed something. Fingers crossed.

    Read the article

  • Drupal on an NFS share has terrible performance

    - by Marcus
    We have a setup where a Drupal 7 site with the following setup - a VMware ESXi 4.1 host server running a web vm and an NFS VM. The web VM is using Apache and mod_php. The site is still in development thus we have to turn off all forms of caching due to the frequently-updated files. Each page request takes around 15-20 seconds to complete. Profiling the PHP code shows that the vast majority of time (normally over 90%) is taking by all the is_dir(), is_file() function calls that load up the modules. I've increased PHP's realpath cache size to several megs and an strace shows that the lstat calls then drop from over 200 to around 6 and stat() decreases a bit (around 600 calls). However, while this has shaved off quite a bit of time, I am simply unable to break past the 10 second per request barrier. Is there a way to get better performance out of this setup that doesn't involve caching? Configs and stats: VMs: web - Centos 6 64bt, 2.5GB RAM, normal CPU/HD prioritisation nfs - Centos 6 64bt, 2GB RAM, normal CPU priority, high HD priority PHP: 32M realpath cache size (it's this high for testing purposes) NFS: ~]# egrep -v '#|^$' /etc/nfsmount.conf [ NFSMount_Global_Options ] Defaultvers=4 Ac=False Rsize=32k Wsize=32k Bsize=32k Reading speeds via NFS are not an issue a dd of a 100M test file using 32k blocks returns: 3200+0 records in 3200+0 records out 104857600 bytes (105 MB) copied, 1.84984 s, 56.7 MB/s real 0m1.857s user 0m0.007s sys 0m0.330s Strace on Apache process with empty realpath cache: % time seconds usecs/call calls errors syscall ------ ----------- ----------- --------- --------- ---------------- 50.78 1.157452 337 3434 28 stat 32.58 0.742656 628 1182 425 open 9.29 0.211788 762 278 1 lstat 3.17 0.072322 0 237865 write 2.45 0.055839 490 114 13 access 0.45 0.010262 43 237 brk 0.34 0.007725 10 811 74 read 0.28 0.006340 9 679 fstat 0.22 0.005069 18 281 poll 0.20 0.004533 6 698 getdents 0.09 0.001960 10 190 mmap 0.05 0.001065 14 74 accept4 0.04 0.001000 333 3 chdir 0.03 0.000750 4 190 munmap 0.01 0.000339 0 836 close 0.01 0.000247 3 75 writev 0.00 0.000068 0 611 fcntl 0.00 0.000063 1 77 shutdown 0.00 0.000000 0 1 lseek 0.00 0.000000 0 5 rt_sigaction 0.00 0.000000 0 1 rt_sigprocmask 0.00 0.000000 0 3 setitimer 0.00 0.000000 0 5 socket 0.00 0.000000 0 5 5 connect 0.00 0.000000 0 74 getsockname 0.00 0.000000 0 15 setsockopt 0.00 0.000000 0 5 getcwd 0.00 0.000000 0 1 futex ------ ----------- ----------- --------- --------- ---------------- Strace after realpaths are cached % time seconds usecs/call calls errors syscall ------ ----------- ----------- --------- --------- ---------------- 60.14 1.371006 484 2831 28 stat 31.79 0.724705 627 1155 425 open 3.53 0.080354 0 237865 write 2.65 0.060433 530 114 13 access 0.43 0.009913 99 100 brk 0.38 0.008730 11 804 74 read 0.35 0.007910 12 675 fstat 0.30 0.006775 10 654 getdents 0.13 0.003065 11 281 poll 0.09 0.002000 333 6 1 lstat 0.07 0.001545 2 807 close 0.05 0.001063 14 74 accept4 0.04 0.001000 6 179 mmap 0.02 0.000404 2 179 munmap 0.01 0.000271 4 75 writev 0.01 0.000212 0 611 fcntl 0.01 0.000129 2 77 shutdown 0.00 0.000022 0 74 getsockname 0.00 0.000000 0 1 lseek 0.00 0.000000 0 5 rt_sigaction 0.00 0.000000 0 1 rt_sigprocmask 0.00 0.000000 0 3 setitimer 0.00 0.000000 0 3 socket 0.00 0.000000 0 3 3 connect 0.00 0.000000 0 15 setsockopt 0.00 0.000000 0 5 getcwd 0.00 0.000000 0 3 chdir ------ ----------- ----------- --------- --------- ---------------- Mount: nfs.xxx.xxx.xxx:/path/to/website/files on /path/to/website/files type nfs 
(rw,hard,intr,noac,vers=4,addr=xx.xx.xx.xx,clientaddr=xx.xx.xx.xx) Any help is, naturally, appreciated.
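
    One detail in those mount options lines up with the profile: Ac=False (noac) turns off NFS attribute caching, so every is_dir()/is_file() stat becomes a GETATTR round trip to the NFS VM. If the code files only change on deploy, re-enabling attribute caching with a short timeout usually removes most of that latency — a sketch with example values, not a guaranteed fix:

        # /etc/nfsmount.conf -- attribute changes may take up to 60s to be noticed on the web VM
        [ NFSMount_Global_Options ]
        Defaultvers=4
        Rsize=32k
        Wsize=32k
        Bsize=32k
        Ac=True
        Acregmin=3
        Acregmax=60
        Acdirmin=30
        Acdirmax=60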

    Read the article

  • Windows 7 64 bit installation freezes after a while during the setup

    - by vinz243
    I have Windows 7 32-bit on my computer. Because I have 5 GB of RAM (Kingston) on my Asus M2N motherboard and only 3 GB were usable, I bought Windows 7 x64 and installed it. It loads the wizard, but after a while it freezes and I have to force a reboot. It first crashed while unzipping the Windows 7 files, but if I wait a while on the terms page, for example, it can crash before that, which makes me think that it is a matter of time. I remember I had the same issue while booting Ubuntu x64: it crashed randomly and never loaded completely. No beep or other messages. Configuration: Software OS (before) W7 x86 Pro New OS : W7 x64 Pro Antivirus : avast (bios verification ?) BIOS 03/27/2008 - v08.00.12 Hardware : Motherboard : Asus M2N Processor : AMD Athlon 64 dual-core @ 2.6 GHz Memory : 5120 MB ((2 + 2) + (1)) NOTES : I ran a memory test using an openSUSE CD; though I have not finished it, it ran. EDIT: I tried waiting without running the setup, and I get the BSOD: A problem... TL;DW IRQL_NOT_LESS_OR_EQUAL If it is.. TL;DW ***STOP: 0x0000000A (0x0000000000000000, 0x0000000000000002, 0x0000000000000001, 0xFFFFF8001A49ED1F)

    Read the article

  • logrotate isn't rotating a particular log file (and I think it should be)

    - by Max Williams
    Hi all. For a particular app, i have log files in two places. One of the places has just one log file that i want to use with logrotate, for the other location i want to use logrotate on all log files in that folder. I've set up an entry called millionaire-staging in /etc/logrotate.d and have been testing it by calling logrotate -f millionaire-staging. Here's my entry: #/etc/logrotate.d/millionaire-staging compress rotate 1000 dateext missingok sharedscripts copytruncate /var/www/apps/test.millionaire/log/staging.log { weekly } /var/www/apps/test.millionaire/shared/log/*log { size 40M } So, for the first folder, i want to rotate weekly (this seems to have worked fine). For the other, i want to rotate only when the log files get bigger than 40 meg. When i look in that folder (using the same locator as in the logrotate config), i can see a file in there that's 54M and which hasn't been rotated: $ ls -lh /var/www/apps/test.millionaire/shared/log/*log -rw-r--r-- 1 www-data root 33M 2010-12-29 15:00 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.access-log -rw-r--r-- 1 www-data root 54M 2010-09-10 16:57 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.debug-log -rw-r--r-- 1 www-data root 53K 2010-12-14 15:48 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.error-log -rw-r--r-- 1 www-data root 3.8M 2010-12-29 14:30 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.ssl.access-log -rw-r--r-- 1 www-data root 16K 2010-12-17 15:00 /var/www/apps/test.millionaire/shared/log/test.millionaire.charanga.com.ssl.error-log -rw-r--r-- 1 deploy deploy 0 2010-12-29 14:49 /var/www/apps/test.millionaire/shared/log/unicorn.stderr.log -rw-r--r-- 1 deploy deploy 0 2010-12-29 14:49 /var/www/apps/test.millionaire/shared/log/unicorn.stdout.log Some of the other log files in that folder have been rotated though: $ ls -lh /var/www/apps/test.millionaire/shared/log total 91M -rw-r--r-- 1 www-data root 33M 2010-12-29 15:05 test.millionaire.charanga.com.access-log -rw-r--r-- 1 www-data root 54M 2010-09-10 16:57 test.millionaire.charanga.com.debug-log -rw-r--r-- 1 www-data root 53K 2010-12-14 15:48 test.millionaire.charanga.com.error-log -rw-r--r-- 1 www-data root 3.8M 2010-12-29 14:30 test.millionaire.charanga.com.ssl.access-log -rw-r--r-- 1 www-data root 16K 2010-12-17 15:00 test.millionaire.charanga.com.ssl.error-log -rw-r--r-- 1 deploy deploy 0 2010-12-29 14:49 unicorn.stderr.log -rw-r--r-- 1 deploy deploy 41K 2010-12-29 11:03 unicorn.stderr.log-20101229.gz -rw-r--r-- 1 deploy deploy 0 2010-12-29 14:49 unicorn.stdout.log -rw-r--r-- 1 deploy deploy 1.1K 2010-10-15 11:05 unicorn.stdout.log-20101229.gz I think what might have happened is that i first ran this config with a pattern matching *.log, and that means it only rotated the two files that ended in .log (as opposed to -log). Then, when i changed the config and ran it again, it won't do any more since it think's its already had its weekly run, or something. Can anyone see what i'm doing wrong? Is it to do with those top folders being owned by root rather than deploy do you think? thanks, max
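
    A quick way to see what logrotate itself thinks of each file — a diagnostic sketch, with the state-file path being the Debian/Ubuntu default — is to dry-run just this config in debug mode and then look at the rotation timestamps it has recorded:

        # Dry run with debug output: prints, per log file, why logrotate would or wouldn't rotate it
        logrotate -d -f /etc/logrotate.d/millionaire-staging

        # The last-rotation timestamps logrotate remembers for these paths
        grep millionaire /var/lib/logrotate/status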

    Read the article

  • Windows Network File Transfer to Samba server: “Are you sure you want to copy this file without its properties?”

    - by jimp
    I am transferring a lot of files to a new NAS based on OpenMediaVault, with the Samba 3.5.6 service running. I am transferring from Windows 7 64-bit to the NAS, and on some media files Windows is prompting about losing some property data across the transfer. I have never seen this before when transferring to Samba boxes I have built myself (vs this turnkey solution), so I'm guessing there must be a Samba setting I can change to preserve the file properties in question instead of permanently losing whatever they contain (Date Taken? Exposure? Flash Fired? etc). Or maybe I've just never encountered this before; I'm really not sure. I tried adding ea support = yes and store dos attributes = yes to the [global] section, but the problem remains. The Linux file system is ext4 mounted with user_xattr (full options: defaults,acl,user_xattr,noexec,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0) as Samba requires. Any ideas would be greatly appreciated. Thank you! Samba config: [global] workgroup = WORKGROUP server string = %h server include = /etc/samba/dhcp.conf dns proxy = no log level = 2 syslog = 2 log file = /var/log/samba/log.%m max log size = 1000 syslog only = yes panic action = /usr/share/samba/panic-action %d encrypt passwords = true passdb backend = tdbsam obey pam restrictions = yes unix password sync = no passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . pam password change = yes socket options = TCP_NODELAY IPTOS_LOWDELAY guest account = nobody load printers = no disable spoolss = yes printing = bsd printcap name = /dev/null unix extensions = yes wide links = no create mask = 0777 directory mask = 0777 use sendfile = no null passwords = no local master = yes time server = yes wins support = yes ea support = yes store dos attributes = yes Note: I found this related question, but it explains the loss due to the user trying to transfer from NTFS to FAT32.
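
    That prompt is usually about NTFS alternate data streams and other metadata the destination can't store; the piece that may be missing from the config above is the streams VFS module — a sketch, not a verified fix, so try it on a test share first:

        [global]
            ea support = yes
            store dos attributes = yes
            # store NTFS alternate data streams (Zone.Identifier, media metadata, etc.) in ext4 user xattrs
            vfs objects = streams_xattr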

    Read the article

  • How can I write directly to my Zune HD hard drive?

    - by iamgoat
    When syncing photos to the Zune HD it resizes them down to a much lower resolution which means I cannot load a high res picture on it (comic book) and zoom in to read it. This defeats the whole purpose of having a zoom feature. There is a registry hack you can make to get the Zune to display under My Computer. Then if you killed the zune process while it's syncing you'd be able to access it like a hard drive and copy files to it. It seems like the more recent firmware and/or Zune software version now prevents this. How can I treat it like an HDD and copy files to it? I simply want to take my original pictures folder and copy it over the low resolution versions the Zune software resized it to. An alternative option would be to remove the hard drive from it and see if I can connect it to a computer directly, but I just got this and don't want to disassemble it yet. Note to Microsoft: Why do you allow me to set the encoding quality of music, but not photos?

    Read the article

  • New to building computers worried about temps

    - by dave
    I'm new to building my own computers and I was wondering about maximum temperatures. I understand that the room temp can affect the computers temp but how relevent is it? I understand that if my room temp is 20°C none of my computer parts could be lower than that. But if my room is 27°C instead of 20°C would this cause my computers parts to heat up more/faster? My new computer I built myself for gaming is i7 2600k 16gb ram ddr3 1600 hd6970 2 gb 240gb ssd ( bought a nas with 3 2tb drives in raid 5 for my home network ) 850w modular psu I also have my old hp computer i3 2120 8gb ram hd6770 1tb hdd I also have 3 laptops in my household, but I am not worried about their temps, they heat up my legs but they are never under stress. Due to size and money reasons I used an old case and it only has one of the sides left on it. Is this bad for the computer and will the extra dust cause problems? Or should I leave it this way or take the missus wrath and buy a case? If so is there any certain case I should get? I don't care about looks I just want card reader and usb slots and for it to run as cool or cooler than now, my case has 1 fan. Also what are the max temps for my new and old computer parts? Is 40°C under load ok for my CPU, what about 70°C for my GPU is that ok too, or should I worry? What are normal and safe temps for my components? I have looked around but there seem to be lots of different answers. I know that 100°C is bad but I want my parts to last as long as possible and this site always seems to give good replies without arguing or flaming.

    Read the article
