Search Results

Search found 11685 results on 468 pages for 'intel core i5'.

Page 387/468 | < Previous Page | 383 384 385 386 387 388 389 390 391 392 393 394  | Next Page >

  • Web filtering (Proxy or DNS) with option for users to ignore the block

    - by Jon Rhoades
    We are struggling with our users visiting infected or "attack" sites, and with phishing in general. Most of our machines are protected by an enterprise antivirus and monitoring solution (McAfee ePO) and we try to get people to use Firefox... But no AV is perfect, we have to endure personal machines as well (albeit on their own 'Plague' VLANs), and we would like to do something about phishing as our users seem intent on disclosing their passwords to the world... To complicate matters we don't want to implement a hard block, for many, many reasons; instead we would like to implement something akin to Firefox's "Reported Scam/Phish/Attack Site" page - "Get me out of here" or, crucially, "Let me in anyway" - giving the user the choice to still infect themselves if they feel like it (or to look at a site that was incorrectly blacklisted). The reason we can't just use Firefox is that we have a core enterprise app only certified on IE6&7 - thank you, Oracle. Is it possible to implement this type of advisory filtering using either a proxy (in our case Squid) or DNS? http://serverfault.com/questions/15801/what-free-options-are-available-for-web-content-filtering and http://serverfault.com/questions/47520/open-source-filtering-of-https-traffic were a good start, but they don't address the advisory aspect of the filtering.
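
    For what it's worth, a minimal sketch of the advisory idea in Squid follows. The warning host, PHP page and list files are placeholders, not something from this setup: blocked domains are redirected to a warning page that offers "Get me out of here" / "Let me in anyway", and choosing "Let me in anyway" adds the client to a bypass list that an earlier allow rule honours.

      # squid.conf sketch - warn.example.com and the list files are hypothetical
      acl badsites dstdomain "/etc/squid/blacklist.txt"
      acl bypass   src       "/etc/squid/bypass_ips.txt"    # clients that chose "Let me in anyway"
      http_access allow bypass
      http_access deny  badsites
      # redirect denied requests to the warning page; %s carries the originally requested URL
      deny_info http://warn.example.com/blocked.php?url=%s badsites

    The warning page would have to append the client IP to bypass_ips.txt and trigger a squid -k reconfigure (or use an external_acl_type helper instead of a flat file); that part is left out here.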

    Read the article

  • Cisco Catalyst 4500 Policy Based Routing

    - by Logan
    In order to test a new firewall I just set up, I'm trying to implement policy-based routing on our core switch. I want traffic from certain VLANs to be routed to the new firewall while everything else continues being routed through the old firewall. I was trying to use this guide. Everything from that guide works fine except trying to run the "ip policy route-map" command in interface configuration mode; IOS is telling me that no such command exists. A "show ip interface vlan" command says that policy routing is disabled. Any ideas? Output of "show ver": Cisco IOS Software, Catalyst 4500 L3 Switch Software (cat4500-IPBASEK9-M), Version 12.2(53)SG, RELEASE SOFTWARE (fc3) Technical Support: http://www.cisco.com/techsupport Copyright (c) 1986-2009 by Cisco Systems, Inc. Compiled Thu 16-Jul-09 19:49 by prod_rel_team Image text-base: 0x10000000, data-base: 0x11D1E3CC ROM: 12.2(31r)SG2 Dagobah Revision 226, Swamp Revision 34 RTTMCB2223-1 uptime is 3 years, 22 weeks, 2 days, 19 hours, 28 minutes Uptime for this control processor is 51 weeks, 2 days, 18 hours, 2 minutes System returned to ROM by power-on System restarted at 19:22:02 UTC Tue Jul 12 2011 System image file is "bootflash:cat4500-ipbasek9-mz.122-53.sg.bin" ... cisco WS-C4510R (MPC8245) processor (revision 4) with 524288K bytes of memory. Processor board ID FOX103703W3 MPC8245 CPU at 400Mhz, Supervisor V Last reset from PowerUp 42 Virtual Ethernet interfaces 244 Gigabit Ethernet interfaces 511K bytes of non-volatile configuration memory. Configuration register is 0x2
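
    For reference, a minimal PBR sketch of the kind such guides describe (the subnet, next-hop and VLAN numbers below are invented). Note that "ip policy route-map" only appears in feature sets that include policy routing, which the IP Base image shown in the "show ver" output may not:

      ip access-list extended TO-NEW-FW
       permit ip 10.10.20.0 0.0.0.255 any          ! traffic from the test VLAN (example subnet)
      route-map NEW-FW permit 10
       match ip address TO-NEW-FW
       set ip next-hop 192.168.99.1                ! new firewall's inside address (example)
      interface Vlan20
       ip policy route-map NEW-FW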

    Read the article

  • Media Center setup won't complete for watching TV

    - by Robert
    I have a problem watching TV in Media Center. The TV constantly pauses 1/2 second then plays 1 second, pauses 1/2 second, plays 1 second - it is constant and does not vary. This problem occurs on all channels, live or recorded. The bottom 5th of the screen is solid green. I know the problem is Media Center because I can use Pinnacle's TVCenterPro and there is no skipping/pausing (and not green on bottom). I was using cable, and switched to DirecTV (satellite). Trying to do "Set up TV signal" in Media Center seems to be what broke it. I get an error "IR Hardware not detected." I can use the remote to "try again" - so the IR hardware works fine (Media Center's remote/sensor). I tried plugging the IR Blaster into both ports, and I tried a different USB port for the IR receiver. I can't complete the setup. Media Center was playing it okay before I tried to run setup. (I ran setup to try to do recording with Media Center.) Pinnacle PCTV 800i HD PCI card (coax cable from DirecTV tuner), ATI Radeon HD 3200 Graphics, Windows XP SP3 Media Center Edition, AMD Athlon Dual Core 2.5 GHz, 1.75 GB RAM.

    Read the article

  • Does AMD Cool n Quiet Slow Down Your System?

    - by Software Monkey
    I discovered today that having AMD Cool'n'Quiet enabled in my BIOS appears to be slowing down my Windows XP SP2 system by about 29% on memory- & CPU-intensive workloads. I was wondering if (a) anyone else has encountered this, (b) anyone can offer an explanation, and (c) there are any negatives I need to be aware of if I keep AMD CnQ disabled. With some superficial testing so far, I don't immediately notice any difference with CnQ off (other than the performance being what I expected from this new hardware). It seems to ramp up the CPU fan a little bit as my program maxes out 1 core, but that's the same as with CnQ on. And when I let the system idle, the CPU fan slows down and the system is as quiet as a mouse (after years of 6 small fans churning like they want to go into orbit, it's nice to have a system again where I can hear the HDDs seeking). Bonus question: Does CnQ cause issues with system stability? I ask because the reason I disabled it is that I have had a few freezes and 1 spontaneous reboot with my new hardware.

    Read the article

  • Restricting memory area for linux kernel

    - by user1066789
    I am running LTIB Linux on a P1022RDK (P1022 core) platform. I have 512 MB = 0x20000000 of memory. I want my Linux kernel to use the second half of the board memory (i.e. from 256 MB to 512 MB) and want the first half to be reserved for some other purpose. For this I am building the Linux kernel using LTIB, and for that purpose I am setting the following kernel configuration. Please suggest whether I am doing it the right way. CONFIG_LOWMEM_SIZE = 0x10000000 # 256 MB CONFIG_PHYSICAL_START = 0x10000000 # Starting from 256MB (second half of memory) In U-Boot I am loading the kernel in the following way: setenv loadaddr 0x11000000 # Kernel base = 0x10000000 + 0x01000000 (offset) setenv fdtaddr 0x10c00000 # Kernel base = 0x10000000 + 0x00c00000 (offset) bootm $loadaddr - $fdtaddr My kernel load address is 0x10000000 & the kernel entry point is 0x10000000. With the above configuration/steps my kernel gets stuck at the following point in U-Boot: ## Booting kernel from Legacy Image at 11000000 ... Image Name: Linux-2.6.32.13 Image Type: PowerPC Linux Kernel Image (gzip compressed) Data Size: 3352851 Bytes = 3.2 MB Load Address: 10000000 Entry Point: 10000000 Verifying Checksum ... OK ## Flattened Device Tree blob at 10c00000 Booting using the fdt blob at 0x10c00000 Uncompressing Kernel Image ... OK ================ >> It should uncompress the FDT here & continue ============== Any thoughts?
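
    One hedged thing to rule out, since the boot stops right after the kernel image is uncompressed: make sure U-Boot is not relocating the FDT to an address the relocated kernel never reaches. fdt_high is a standard U-Boot environment variable; whether it is the culprit on this board is a guess:

      # keep U-Boot from moving the device tree away from 0x10c00000 (sketch)
      setenv fdt_high 0xffffffff
      bootm $loadaddr - $fdtaddr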

    Read the article

  • dd oflag=direct 5x faster

    - by César
    I have CentOS 6.2 on a server with these specs: 2x CPU 16-core AMD Opteron 6282 SE, 64GB RAM, RAID controller H700 1GB cache NV - 2 HD 74GB SAS 15Krpm RAID1 stripe 16k (OS CentOS 6.2) sda - 4 HD 146GB SAS 15Krpm RAID10 stripe 16k (ext4 bs 4096, no barriers) sdb -> /vol01, RAID controller H800 1GB cache NV - MD1200 12 HD 300GB SAS 15Krpm RAID10 stripe 256k (for DB Postgres 8.3.18) (ext4 bs 4096, stride 64, stripe-width 384, no barriers) sdc -> /vol02. I'm benchmarking IO speed with dd, and see that on the 12-disk RAID10, if I run: dd if=/dev/zero of=DD bs=8M count=10000 oflag=direct 10000+0 records in 10000+0 records out 83886080000 bytes (84 GB) copied, 126,03 s, 666 MB/s but if I remove the "oflag=direct" option I obtain about 80 MB/s. In the read benchmark, results are similar: dd of=/dev/null if=DD bs=8M count=10000 iflag=direct 10000+0 records in 10000+0 records out 83886080000 bytes (84 GB) copied, 79,5918 s, 1,1 GB/s If I remove iflag=direct I obtain 150 MB/s... I don't understand these huge differences; on other machines I don't see this behavior. Could I have some kernel parameter misconfigured? Thanks!
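
    For comparison, a hedged way to keep the page cache from dominating the non-direct numbers (plain dd without oflag/iflag=direct largely measures caching and writeback throttling rather than the array itself):

      # write test: make dd flush data to disk before it reports a rate
      dd if=/dev/zero of=DD bs=8M count=10000 conv=fdatasync
      # read test: drop the page cache first so the file really comes off the array
      sync && echo 3 > /proc/sys/vm/drop_caches
      dd of=/dev/null if=DD bs=8M count=10000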

    Read the article

  • Analyze a BSOD (irql_less_than_or_equal)

    - by Bruno Reis
    Hello. About 2 months ago I bought a new system and built it at home: Motherboard: XFX X58i Processor: Core i7 920, using the stock cooler Memory: 3x2GB Corsair DDR3 1600 Video card: NVIDIA GTS 250 (1GB) Hard disk: 2x WD 500GB, 7200rpm I have 2 screens plugged into the video card, and the system is connected to a 550W PSU. Nothing is overclocked. After building the system, I stressed it a lot with Prime95 and rthdribl to check its stability. All my tests were perfect. So I reinstalled Win 7 x64 Professional and started using it normally. The first week (2010-03-15) I got the infamous irql_less_than_or_equal BSOD. Ten days later (2010-03-24) I got another one. Then on 2010-04-09 and 2010-05-04. Two days ago it became worse: I now get one blue screen per day! (2010-05-12, 2010-05-13, 2010-05-14). I installed BlueScreenView to try to obtain some information, but I'm not able to extract anything useful apart from the bug check string (irql_less_than_or_equal) and that it was caused by ntoskrnl.exe (the first three at ntoskrnl.exe+71f00, the last 4 at ntoskrnl.exe+70600 -- which I suspect could be the same thing, as Microsoft could have patched this file in the meantime, so the address of the function causing it changed). Then I stressed my memory sticks with memtest; they worked perfectly. After booting, I stressed my GPU with FurMark and RTHDRIBL, and everything was fine. Then I stressed the CPU with 4 instances of Prime95 while monitoring the temperature -- which never exceeded 85°C with the case closed -- and everything was fine. Finally I stressed the whole system with HeavyLoad for a looooong time, and everything worked just fine. So, I have stressed most of the components of the system, but couldn't get any useful information from it. Do you have any hint on what else I can do to find the culprit? Thanks Bruno
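
    One hedged next step, since BlueScreenView only shows the faulting module: open the minidumps with the Debugging Tools for Windows and let Driver Verifier stress the installed drivers (stock tools; the switches below are the usual defaults, adjust to taste):

      rem In WinDbg: File > Open Crash Dump > C:\Windows\Minidump\<latest>.dmp, then:
      !analyze -v
      rem From an elevated prompt: enable standard verification for all drivers, then reboot
      verifier /standard /all
      rem Once a verifier-flagged bugcheck names the culprit driver, turn it off again
      verifier /reset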

    Read the article

  • Git fails to push with error 'out of memory'

    - by jwir3
    I'm using gitosis on a server that has a low amount of memory, specifically around 512 MB. When I try to push a large folder (it happens to be a backup from an Android phone), I get: me@corellia:~/Configs/$ git push origin master Counting objects: 18, done. Delta compression using up to 8 threads. Compressing objects: 100% (14/14), done. fatal: Out of memory, malloc failed MiB | 685 KiB/s error: pack-objects died of signal 13 error: failed to push some refs to 'git@dagobah:Configs' I've been searching the web, and notably found: http://www.mail-archive.com/[email protected]/msg01747.html as well as http://git.661346.n2.nabble.com/Out-of-memory-error-during-git-push-td5443705.html but these don't seem to help me for two reasons: 1) I am not actually out of memory when I push. When I run 'top' during the push, I get: 24262 git 18 0 16204 6084 1096 S 2 1.2 0:00.12 git-unpack-obje Also, during the push, if I run head /proc/meminfo, I get: MemTotal: 524288 kB MemFree: 289408 kB Buffers: 0 kB Cached: 0 kB SwapCached: 0 kB Active: 0 kB Inactive: 0 kB HighTotal: 0 kB HighFree: 0 kB LowTotal: 524288 kB So, it seems that I have enough memory free, but it's actually still failing, and I'm not enough of a git guru to figure out what is happening. I would appreciate it if someone could give me a hand here and tell me what could be causing this problem, and what I can do to solve it. Thanks! EDIT: The output of running the ulimit -a command: scottj@dagobah:~$ ulimit -a core file size (blocks, -c) 0 data seg size (kbytes, -d) unlimited scheduling priority (-e) 0 file size (blocks, -f) unlimited pending signals (-i) 204800 max locked memory (kbytes, -l) 32 max memory size (kbytes, -m) unlimited open files (-n) 1024 pipe size (512 bytes, -p) 8 POSIX message queues (bytes, -q) 819200 real-time priority (-r) 0 stack size (kbytes, -s) 10240 cpu time (seconds, -t) unlimited max user processes (-u) 204800 virtual memory (kbytes, -v) unlimited file locks (-x) unlimited
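
    One hedged workaround that often helps on low-memory git servers is to cap how much memory the pack machinery may use, set as the user the gitosis repositories run under on the server. The values below are starting points to tune, not recommendations:

      git config --global pack.threads 1
      git config --global pack.windowMemory 64m
      git config --global pack.packSizeLimit 64m
      git config --global core.packedGitLimit 64m
      git config --global core.packedGitWindowSize 16m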

    Read the article

  • Poor SSL performance with vsftpd

    - by petrus
    I'm trying to tweak vsftpd to achieve maximum performance for my usage: I have only one or two clients that connect to the server. File size is between ~15MB and 1GB. A typical transfer batch represents between 1 and 2GB of data. For testing purposes, I'm using a tmpfs on both sides (thus eliminating any disk bottleneck) with a single 1GB file. When SSL is disabled, performance is good, with a transfer rate of ~120MB/s (reaching the limits of gigabit networking). With SSL enabled only for control traffic (and not data traffic), performance drops to about 112MB/s, which is still within acceptable limits. However, when SSL is enabled for data flows, the transfer speed drops dramatically: 6.7MB/s using 3DES & SHA (ssl_ciphers=DES-CBC3-SHA in vsftpd.conf), 16MB/s using DES & SHA (ssl_ciphers=DES-CBC-SHA). I didn't test other ciphers, but from what I can see of the CPU usage during the transfer, it seems that vsftpd only uses a single CPU core per client. While this may be fine for large FTP sites with hundreds of clients, I'd like to avoid this behavior and use more resources on the server. On a side note, if you have any ideas regarding other openssl ciphers...
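
    Since the bottleneck looks like one core doing 3DES, a hedged first step is to try a cheaper cipher and measure what a single core can actually push (the cipher names are standard OpenSSL ones; the gains here are untested):

      # vsftpd.conf - prefer RC4 or AES-128 over DES/3DES for the data connection
      ssl_ciphers=RC4-SHA:AES128-SHA
      # quick per-core comparison of raw cipher throughput on this box
      openssl speed rc4 aes-128-cbc des-ede3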

    Read the article

  • Reasonable automatic HTML to PDF conversion (in UNIX/Linux environment)

    - by Alex Balashov
    Is there a way to generate PDF documents from HTML files automatically in Linux where the PDF offers some kind of reasonable level of resemblance to the input file? A command-line tool - as opposed to an interactive GUI of some kind - is key. I have tried htmldoc and some related cousins, of course. But these tools are hopelessly stone-age; htmldoc doesn't support CSS at all. You won't find a lot of HTML documents these days that don't have at least some CSS styling. I don't really care about stupid effects or minor embellishments, but the issue is that CSS is at the core of most layouts these days; not many folks are using 6 layers of nested tables anymore. So, if the conversion tool has no grasp of CSS whatsoever, it's not just a matter of "the document doesn't look quite right"; it is likely to not meet the minimum standard of usability at all. It has been suggested to me by some folks to try to use the Gecko rendering engine to generate images that can be converted to PDFs, but I have no idea how one would go about doing this, let alone easily. I have no trouble believing that there are good commercial tools that do this, but I'm really looking for an open-source package if possible, as the endeavour itself is an open-source one and doesn't pay. Thanks in advance!
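
    One option that comes up a lot for exactly this requirement - and that follows the "drive a browser engine" suggestion - is wkhtmltopdf, which renders through WebKit from the command line. Treat the flags below as a sketch, not a guarantee that it will render any particular document faithfully:

      # render a local file or a URL; WebKit handles the CSS layout
      wkhtmltopdf input.html output.pdf
      wkhtmltopdf --page-size A4 --print-media-type http://example.com/report.html report.pdf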

    Read the article

  • Kickstart CentOS 6 prompting for TCP/IP with network set to DHCP

    - by Andy Shinn
    I am trying to stop my kickstart CentOS install prompting me for TCP/IP information. After I click through this prompt (keeping IPv4 and IPv6 at their defaults) the installation continues and completes just fine. Below is my kickstart file: # Andy's super awesome VM kickstart file install url --url=http://mirrors.kernel.org/centos/6/os/x86_64 lang en_US.UTF-8 keyboard us text %include /tmp/network.ks rootpw --iscrypted $6$RA8DyrNTsVJkGIgY$ohZ62HHiOjNnn1yDMZlIu3lQ63D3plGPcbVZtPKE8Oq6Z.IGUgN.kNLkxs/ZymZuluRDWsW2eey5zLOl2G3mp. firewall --service=ssh authconfig --enableshadow --passalgo=sha512 selinux --disabled timezone America/Los_Angeles bootloader --location=mbr --driveorder=vda --append="crashkernel=auto rhgb quiet" # The following is the partition information you requested # Note that any partitions you deleted are not expressed # here so unless you clear all partitions first, this is # not guaranteed to work zerombr clearpart --all --drives=vda --initlabel part /boot --fstype=ext4 --size=500 part pv.253002 --grow --size=1 volgroup vg1 --pesize=4096 pv.253002 logvol / --fstype=ext4 --name=lv_root --vgname=vg1 --grow --size=1024 --maxsize=51200 logvol swap --name=lv_swap --vgname=vg1 --grow --size=4032 --maxsize=4032 repo --name="CentOS" --baseurl=http://mirrors.kernel.org/centos/6/os/x86_64 --cost=100 repo --name="Puppet Labs Products" --baseurl=http://yum.puppetlabs.com/el/6/products/x86_64 repo --name="Puppet Labs Dependencies" --baseurl=http://yum.puppetlabs.com/el/6/dependencies/x86_64 repo --name="EyeFi" --baseurl=http://flexo.eye.fi/6/eye-fi-api %packages @core @server-policy puppet facter %end %pre --erroronfail #!/bin/bash for x in `cat /proc/cmdline`; do case $x in SERVERNAME*) eval $x echo "network --onboot yes --device eth0 --bootproto dhcp --hostname ${SERVERNAME}.eye.fi" > /tmp/network.ks ;; esac; done %end %post puppet agent --waitforcert 10 --onetime --no-daemon --pluginsync --server puppet.eye.fi %end reboot My kernel arguments are in the following virt-install command that I use to start the install: virt-install -n zabbix -r 2048 --vcpus=2 -l http://mirrors.kernel.org/centos/6/os/x86_64 --disk /dev/vg_inf1/zabbix --network bridge=br85 --initrd-inject=/home/ashinn/vm_kickstart --extra-args "ks=file:/vm_kickstart SERVERNAME=zabbix" --autostart During the install, I can pull up a console on the second terminal and verify the contents of /tmp/network.ks are: network --onboot=yes --bootproto=dhcp --ipv6=auto --hostname=jenkins2.mydomain.com Why might Anaconda be prompting for the TCP/IP settings when they are already set to DHCP?

    Read the article

  • What does this mean: "SATP VMW_SATP_LOCAL does not support device configuration"?

    - by Jason Tan
    Can anyone tell me what this means in ESXi 5.1?: SATP VMW_SATP_LOCAL does not support device configuration I've googled it and I get a lot of results, but as yet all the pages that contain the string are discussing other matters. The storage array is an HDS HUS-VM and the hosts are HP BL460c G8 blades with Flex Fabric and Flex Fabric VCs, which I am in the process of commissioning and would like to get started on the right foot - i.e. error- and warning-free! naa.600508b1001c56ee3d70da65f071da23 Device Display Name: HP Serial Attached SCSI Disk (naa.600508b1001c56ee3d70da65f071da23) Storage Array Type: VMW_SATP_LOCAL Storage Array Type Device Config: SATP VMW_SATP_LOCAL does not support device configuration. Path Selection Policy: VMW_PSP_FIXED Path Selection Policy Device Config: {preferred=vmhba0:C0:T0:L1;current=vmhba0:C0:T0:L1} Path Selection Policy Device Custom Config: Working Paths: vmhba0:C0:T0:L1 Is Local SAS Device: true Is Boot USB Device: false This is the same LUN: ~ # esxcli storage core device list -d naa.60060e80132757005020275700000016 naa.60060e80132757005020275700000016 Display Name: HITACHI Fibre Channel Disk (naa.60060e80132757005020275700000016) Has Settable Display Name: true Size: 204800 Device Type: Direct-Access Multipath Plugin: NMP Devfs Path: /vmfs/devices/disks/naa.60060e80132757005020275700000016 Vendor: HITACHI Model: OPEN-V Revision: 5001 SCSI Level: 2 Is Pseudo: false Status: degraded Is RDM Capable: true Is Local: false Is Removable: false Is SSD: false Is Offline: false Is Perennially Reserved: false Queue Full Sample Size: 0 Queue Full Threshold: 0 Thin Provisioning Status: unknown Attached Filters: VAAI_FILTER VAAI Status: supported Other UIDs: vml.020001000060060e801327570050202757000000164f50454e2d56 Is Local SAS Device: false Is Boot USB Device: false ~ #
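
    For what it's worth (hedged, based on the general meaning of that string rather than anything HDS-specific): it usually just indicates that the claiming SATP - VMW_SATP_LOCAL for the local SAS boot disk here - has no per-device configuration options to display, i.e. it is informational rather than an error. The stock commands below show what claims each device and which SATPs and default PSPs are loaded, which helps confirm the HUS-VM LUNs are claimed the way you expect:

      esxcli storage nmp device list -d naa.600508b1001c56ee3d70da65f071da23
      esxcli storage nmp device list -d naa.60060e80132757005020275700000016
      esxcli storage nmp satp list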

    Read the article

  • Apache2 slow serving static while healthy

    - by user45339
    My Apache status looks like: 201 requests/sec - 98.8 kB/second - 504 B/request 85 requests currently being processed, 345 idle workers _____CCW_C_____C__C__C_R____C_WC_________C__C____CW__C__CCC_____ __C____W______C___C___CW__C_C______C__W_C__C_____CCC____C______R CC_C_______C___C____C______________C______C__C________________C_ ___________________C______________________C_______C___C_____C___ CC____C__C___R_____C_C_CC__________C___C___________R____C_C_C___ ______C______W_W__W___C____________________C__WCC__R__R_C_______ R__RC________________________C___R____W__C____.................. .................................................... Server load averages 2 on a 4-core machine. IO utilization is 10-15% and doesn't have many jumps over 70%. The machine has almost 4 GB free and uses no swap. The site on the machine is a PHP site. All the PHP code is optimized and fast most of the time it gets accessed, however sometimes requests get stuck. Stuck meaning: no response for at least 10 sec. We debugged the PHP code, but it is quite optimal and fast. We spent a lot of time on it until we decided to test requesting a static test.html page containing just: <html><body>test</body></html> This static resource also gets 'stuck' in the same manner the PHP pages get 'stuck'. How is this possible given the health of the system? I tested the network, but when the site monitoring shows 'slowness' for PHP, the HTML test file also takes far longer than 10 sec to load using: time lynx -dump http://127.0.0.1/test.html We are kind of desperate to solve this problem, but we cannot seem to tackle it.
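
    Given that static files stall too, a hedged place to look next is below the application layer: the TCP listen/accept queue and whatever a stuck child is actually blocked on (stock tools; the PID is whichever child shows W in the scoreboard during a stall):

      # listen backlog and overflow counters while a stall is happening
      ss -lnt 'sport = :80'
      netstat -s | egrep -i 'listen|overflow'
      # see what one busy child is blocked on (replace <pid> with a W-state child's PID)
      strace -tt -T -p <pid>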

    Read the article

  • Adobe Premiere CS5 problem with the display driver

    - by user30179
    This error is really hindering our project. I get an error that started showing up on June 16th, 2010. There are no Windows updates on the same date as the error, other than Windows Defender. It seems to happen when working with image overlays. ERROR: "The NVIDIA OpenGL driver detected a problem with the display driver and is unable to continue. The application must close." We opened the side of the case in case there is an overheating problem. Nvidia driver ver 8.16.11.9175 (nVidia Quadro FX 1700) I am running: Windows 7 x64 Adobe Premiere CS5 Production nVidia Quadro FX 1700 (MRGA14L) 4 GB RAM RAID 10 2 750GB drives Duo core 3.0 6MB L2 Cache At least three other people have come across this error: NVidia Forum EVGA Forum NVidia Forum UPDATE: Having the case open did not help. I also installed new Nvidia drivers; now I get a different error: *ERROR:* Your hardware configuration does not meet minimum specifications needed to run the application. The application must close. I ran Windows Update and installed all four updates, so now I am waiting to see if the error occurs again. Beyond this I am out of options.

    Read the article

  • Media Center setup won't complete for watching TV

    - by Robert
    I have a problem watching TV in Media Center. The TV constantly pauses 1/2 second then plays 1 second, pauses 1/2 second, plays 1 second - it is constant and does not vary. This problem occurs on all channels, live or recorded. The bottom 5th of the screen is solid green. I know the problem is Media Center because I can use Pinnacle's TVCenterPro to watch TV and there is no skipping/pausing (and not green on bottom). I was using cable, and switched to DirecTV (satellite). Trying to do "Set up TV signal" in Media Center seems to be what broke it. I get an error "IR Hardware not detected." I can use the remote to "try again" - so the IR hardware works fine (Media Center's remote/sensor). I tried plugging the IR Blaster into both ports, and I tried a different USB port for the IR receiver. I can't complete the setup. Media Center was playing TV okay (with the new DirecTV) before I tried to run setup. (I ran setup to try to do recording with Media Center.) Hardware/Software: Pinnacle PCTV 800i HD PCI card (coax cable from DirecTV tuner), ATI Radeon HD 3200 Graphics, Windows XP SP3 Media Center Edition, AMD Athlon Dual Core 2.5 GHz, 1.75 GB RAM.

    Read the article

  • Windows 7 - svchost high cpu usage.

    - by Leonardo
    Hey guys! I'm having a problem with Windows 7 x64. I thought it was slow, and then I saw that the CPU usage was always around 80%, so I started digging through Google. There are two svchost processes consuming around 30% each, and in Resource Monitor there's a "System Interrupts" entry consuming 45% all the time. I tried closing the applications and it makes no difference. So I tried some other things that I found on Google, like disabling system update, but that didn't work. I'd love some help here. I don't know if it will help, but here are my specs: Core 2 Duo 4400, ATI Radeon 4850, 4GB DDR2 RAM. Thanks anyway for your attention :) EDIT So I ran the program and I got this info, did I get it right? EDIT As you asked, here it is, did I get it right now? For the other TCP/IP there's nothing. Thanks again! :D EDIT I tried something here: I ran msconfig and took the services that one of the svchost processes was using out of the startup, and now my CPU is around 50%, but I would still like to make this better; I can't lose that much CPU power just because of Windows... thanks. EDIT Yeah, there's nothing I can do here, going to install XP for a while, it's really weird...
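
    If it helps, the usual first step is to map each svchost instance to the services it hosts (stock Windows commands below); a constantly high "System Interrupts" figure, on the other hand, normally points at a driver or failing piece of hardware rather than a service.

      rem list which services live inside each svchost.exe instance
      tasklist /svc /fi "imagename eq svchost.exe"
      rem then stop suspects one at a time and watch CPU, e.g. Windows Update:
      net stop wuauserv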

    Read the article

  • Shared files folder in Amazon Elastic Beanstalk environment

    - by por
    I'm working on a Drupal application, which is planned to be hosted in an Amazon Elastic Beanstalk environment. Basically, Elastic Beanstalk enables the application to scale automatically by starting additional web server instances based on predefined rules. The shared database is running on an Amazon RDS instance, which all instances can access properly. The problem is the shared files folder (sites/default/files). We're using git as SCM, and with it we're able to deploy new versions by executing $ git aws.push. In the background Elastic Beanstalk automatically deletes ($ rm -rf) the current codebase from all servers running in the environment, and deploys the new version. The plan was to use S3 (s3fs) for shared files in the staging environment, and NFS in the production environment. We've managed to set up the environment to the extent that the shared files folder is properly mounted after a reboot. But... the problem is that, in this setup, the deployment of new versions on running instances fails because $ rm -rf can't remove the mounted directory; as a result, the entire environment goes down and we need to restart it, which isn't really an elegant solution. Question #1: what would be the proper way to manage shared files in this kind of deployment? Are you running such an environment? How did you solve the problem? Looking at the Elastic Beanstalk Hostmanager code (Ruby), there seems to be a way to hook our functionality (unmount if mounted in pre-deploy and mount in post-deploy) into Hostmanager (/opt/hostmanager/srv/lib/elasticbeanstalk/hostmanager/applications/phpapplication.rb) but the scripts defined in the file (i.e. /tmp/php_post_deploy_app.sh) don't seem to be working. That might be because our Ruby skills are non-existent. Question #2: did you manage to hook your functionality into Hostmanager in a portable way (i.e. without changing the core Hostmanager files)?
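
    For the record, this is the shape of the hook scripts we had in mind - very much a sketch: the /tmp/php_*_deploy_app.sh names come from the Hostmanager source mentioned above, whether Hostmanager actually runs them at the right moments is exactly the open question, and the deploy path is an assumption to adjust:

      #!/bin/bash
      # /tmp/php_pre_deploy_app.sh - unmount the shared files dir so rm -rf can clear the app dir
      MNT=/var/app/current/sites/default/files      # assumed deploy path, adjust to the real one
      mountpoint -q "$MNT" && umount "$MNT"

      #!/bin/bash
      # /tmp/php_post_deploy_app.sh - remount the shared files dir once the new version is in place
      mount /var/app/current/sites/default/files    # relies on a matching /etc/fstab entry (s3fs or NFS)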

    Read the article

  • Black Screen on Logon (windows 7 home premium)

    - by Blacknight334
    I have been having some trouble with a Dell XPS 15 laptop that I recently purchased. It is under a month old, and a problem has occurred: upon logging on just after start-up, the computer will just sit on a black logon screen (with the mouse still visible and active) for a few minutes. It is extremely annoying, especially when I'm in a rush. So far, I have tried updating the drivers and installing all Windows updates, and still nothing. Also, it doesn't seem to do it when I log into safe mode, or if it does, it will do it for less than 10 seconds and then load the desktop (in a normal boot, it usually takes a few minutes). I have also run a number of the built-in diagnostics, but found no errors. I want to avoid having to do a system restore for as long as I can. Does anyone know anything that can help? (The laptop is running a 500GB SSD, 2GB Nvidia 640m, 8GB RAM, 3rd gen i7 quad core with 8 threads.) Thanks.

    Read the article

  • SATA Windows 7 Problems

    - by Isaacs
    Scenario: Core 2 Duo processor, Gigabyte MB, 4 SATA Western Digital 500 GB hard drives, Windows 7 64-bit. Problem: Copying data from USB or among the SATA hard drives is faulty. When trying to copy 20GB from one HD to another it starts off with normal ~14-15 MB/s transfer rates and eventually bogs down to < 120KB/s transfer rates. If I leave it alone overnight I come back to my computer crashed and sitting at the BIOS detecting hard drives. Troubleshooting: Removed all but the 1 HD with the OS on it, and everything seems to be happy. I can copy large files from a USB HD to the main/single HD. Ran SpinRite on all hard drives, no errors found. Tried adding one HD back to the machine and the problem exists; tried switching SATA cables, and SATA ports on the MB. Reinstalled Windows 7 twice (from different disks..). Oddly enough, if I boot to Ubuntu everything works fine. Getting ready to purchase a new MB, but wanted to see if anyone had suggestions. Thanks!

    Read the article

  • How can I debug a port/connectivity issue?

    - by rfw21
    I am running a simple WebSocket server on Amazon EC2 (Fedora Core). I've opened the relevant port using ec2-authorize, and checked that it's opened. Iptables is definitely not running. However I can't connect to the port from outside EC2. I've tried the following (my server is running on port 7000): telnet ec2-public-dns.xx.xx.xx.amazon.com 7000 (from within EC2: connects fine) nmap localhost (output includes line: 7000/tcp open afs3-fileserver) telnet ec2-public-dns.xx.xx.xx.amazon.com 7000 (this time from my local machine: I get "connection refused: Unable to connect to remote host") The strange thing is this: if I start Nginx on port 7000 then it works and I can connect from outside EC2! And the WebSocket server fails on port 80, where Nginx works fine. To me this suggests a problem with the WebSocket server, BUT I can connect to it successfully from within EC2. (And it works fine on a different VPS account). How can I debug this further? If anybody can stop me tearing my hair out, I'd be very grateful indeed :)
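
    A hedged guess worth ruling out first, given that Nginx on the same port is reachable: the WebSocket server may be bound to 127.0.0.1 only (the nmap run against localhost would still show the port as open), which would explain why it answers locally but refuses outside connections. A quick check with stock tools:

      # which address is the server actually listening on?
      netstat -tlnp | grep 7000
      # 0.0.0.0:7000 or :::7000 is reachable from outside; 127.0.0.1:7000 is loopback-only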

    Read the article

  • How to manage process-to-CPU core affinities?

    - by Philippe
    I use a distributed user-space filesystem (GlusterFS) and I would like to be sure the GlusterFS processes will always have the computing power they need. Each execution node of my grid has 2 CPUs, with 4 cores per CPU and 2 threads per core (16 "processors" are seen by Linux). My goal is to guarantee that the GlusterFS processes have enough processing power to be reliable, responsive and fast. (There is no marketing here, just the dreams of a sysadmin ;-) I consider two main points: the GlusterFS processes, and the I/O for data access (on local disks, or remote disks). I thought about binding the Linux kernel and the GlusterFS instances to a specific "processor". I would like to be sure that: no grid job will impact the kernel and the GlusterFS instances; researchers' jobs won't be affected by system processes (I'd like to reserve a pool of cores for job execution and be sure that no system process will use these CPUs). But what about I/O? As we handle a huge amount of data (several terabytes), we'll have a lot of interrupts. How can I distribute these operations across my processors? What are the "best practices"? Thanks for your comments!
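
    A hedged sketch of the usual building blocks (the core numbers are only illustrative, and which cores are siblings depends on the topology lscpu reports): pin the Gluster daemons with taskset or a cpuset, fence the job cores off from the scheduler with isolcpus, and steer disk/NIC interrupts with the per-IRQ affinity masks.

      # pin the gluster daemons to cores 0-3 (example: the first physical CPU)
      for p in $(pidof glusterfsd glusterfs); do taskset -pc 0-3 "$p"; done
      # keep cores 4-15 for grid jobs only (kernel boot parameter): isolcpus=4-15
      # steer a given IRQ (disk controller, NIC) onto core 1 - the mask is hex, bit 1 = core 1
      echo 2 > /proc/irq/<irq-number>/smp_affinity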

    Read the article

  • Probability of Blade Chassis Failure

    - by ChrisZZ
    In my organisation we are thinking about buying blade servers instead of rack servers. Of course the technology vendors also make them sound very nice. A concern that I read very often in different forums is that there is a theoretical possibility of the chassis going down - which would in consequence take all the blades down. That is due to the shared infrastructure. My reaction to this possibility would be to have redundancy and buy two chassis instead of one (very costly of course). Some people (including e.g. HP vendors) try to convince us that the chassis is very, very unlikely to fail, due to the many redundancies (redundant power supplies, etc.). Another concern on my side is that if something goes down, spare parts might be required - which is difficult in our location (Ethiopia). So I would ask experienced administrators that have managed blade servers: What is your experience? Do they go down as a whole - and what is the sensitive shared infrastructure that might fail? That question could be extended to shared storage. Again I would say that we need two storage units instead of only one - and again the vendors say that these things are so rock solid that no failure is expected. Well - I can hardly believe that such critical infrastructure can be very reliable without redundancy - but maybe you can tell me whether you have successful blade-based projects that work without redundancy in their core parts (chassis, storage...). At the moment we are looking at HP, as IBM looks much too expensive... thanks a lot, best regards, Christian

    Read the article

  • How do I install git/git-svn on RHEL5 with a custom perl install?

    - by kbosak
    I've had nothing but trouble trying to install Git on RHEL5. First I tried from source, but ran into several issues with installing the docs. There appeared to be missing libs and such for parsing xml that I couldn't figure out how to get installed and recognized. Then I tried using the EPEL yum repository and was able to install git and its docs but now git-svn is not working. It complains about not finding the perl modules Git.pm and SVN/Core.pm. When I set the GITPERLLIB environment variable to the location of those libs it seg faults. Some background: RHEL5 came with perl 5.8.8, but we wanted to use 5.10 so I installed that from source (to a custom location). Someone then symlinked the system perl binary to this newer version of Perl to make sure nobody uses the wrong version. Each developer also has their own build of Perl. So I'm wondering what's the best way to install Git on this system and have both the docs and git-svn working correctly for each user. Unfortunately I'm a developer and not as good with system administration so take it easy on me.
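
    One hedged angle, since the failures all involve Perl modules: when building git from source, its Makefile lets you point it at a specific Perl, and git-svn additionally needs the Subversion Perl bindings (SVN::Core) built for that same interpreter. The paths below are only examples of what that would look like:

      # build git against the perl that will run git-svn (example prefix and perl path)
      make prefix=/usr/local PERL_PATH=/opt/perl-5.10/bin/perl all
      make prefix=/usr/local PERL_PATH=/opt/perl-5.10/bin/perl install
      # SVN::Core must come from the subversion source tree's swig-pl bindings, configured
      # with the same perl; an SVN::Core built for 5.8.8 will not load under 5.10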

    Read the article

  • Win2008/IIS7/fx2.0 - 500.19 error

    - by Keith Barrows
    I installed new boxes at the beginning of the week. 1) Web server on Win2008 x64, IIS 7 + all updates 2) DB server on Win2008 x64, SQL 2008 Ent + all updates I configured my websites, set up host headers and DNS entries, worked through some problems with my handlers and finally got it all running Wednesday morning. Our team has been using it since then. This morning I came in and every one of us is getting a 500 error. Error Summary HTTP Error 500.19 - Internal Server Error The requested page cannot be accessed because the related configuration data for the page is invalid. Detailed Error Information Module IIS Web Core Notification Unknown Handler Not yet determined Error Code 0x80070005 Config Error Cannot read configuration file due to insufficient permissions Config File \\?\C:\RivWorks\dev\web.config Requested URL http://dev.rivworks.com:80/login.aspx Physical Path Logon Method Not yet determined Logon User Not yet determined Config Source -1: 0: Links and More Information This error occurs when there is a problem reading the configuration file for the Web server or Web application. In some cases, the event logs may contain more information about what caused this error. I've gone through the KB articles, made sure IIS_IUSRS had read permissions, and am now stumped. What bothers me is that IIS is looking in \\?\C:\ instead of just C:\. What is happening? TIA
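
    For what it's worth, the \\?\ prefix is just how the detailed error page prints the physical path, so it is probably not the problem in itself. A hedged sketch of re-granting read access on the tree (whether IIS_IUSRS or a specific application pool account needs it depends on how the site and pool are configured):

      rem grant read/execute down the tree to the built-in IIS users group
      icacls C:\RivWorks\dev /grant IIS_IUSRS:(OI)(CI)RX /T
      rem if the app pool runs under a specific account (e.g. NETWORK SERVICE or a domain user),
      rem grant that account the same way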

    Read the article
