Search Results

Search found 80052 results on 3203 pages for 'data load performance'.


  • Passing data between the VirtualBox Host and the Guest

    - by Fat Bloke
    Here's a good question: "How can you figure out the VM name from within the VM itself?" While this data is not automatically available, the general purpose, and very powerful VirtualBox "GuestProperty" APIs can be used from the host and guest to pass arbitrary data, in key/value pairs format, in and out of the guest. Note that this does require that the VirtualBox Guest Additions have been installed in the guest. To play with this, try using the "VBoxManage" command line on your VirtualBox host machine, and "VBoxControl" in the guest. Host syntax VBoxManage guestproperty get <vmname>|<uuid> <property> [--verbose] VBoxManage guestproperty set <vmname>|<uuid> <property> [<value> [--flags <flags>]] VBoxManage guestproperty enumerate <vmname>|<uuid> [--patterns <patterns>] VBoxManage guestproperty wait <vmname>|<uuid> <patterns> [--timeout <msec>] [--fail-on-timeout]   Guest syntax VBoxControl.exe guestproperty        get <property> [-verbose] VBoxControl.exe guestproperty        set <property> [<value> [-flags <flags>]] VBoxControl.exe guestproperty        enumerate [-patterns <patterns>] VBoxControl.exe guestproperty        wait <patterns>                                      [-timestamp <last timestamp>]                                      [-timeout <timeout in ms>  So to solve our problem above, we set the vm name in the Host system on an arbitrary key like this: $ VBoxManage guestproperty set "Windows 7 (x64)" /MyData/VMname "Windows 7 (x64)" And within the guest we can use: C:\Program Files\Oracle\VirtualBox Guest Additions>VBoxControl.exe guestproperty get /MyData/VMname Oracle VM VirtualBox Guest Additions Command Line Management Interface Version 4.1.14 (C) 2008-2012 Oracle Corporation All rights reserved. Value: Windows 7 (x64) The GuestProperty API is pretty powerful, so for the interested, get more info in the User Manual. - FB 
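
    A minimal sketch of the same trick scripted end to end, using the VM name and key from the example above. The Linux-guest path to VBoxControl is an assumption; once the Guest Additions are installed it is usually on the PATH.

    ```bash
    #!/bin/bash
    # Sketch: stash the VM's own name in a guest property on the host, then
    # read it back from inside the guest. Names/keys follow the example above.

    # On the host:
    VM="Windows 7 (x64)"
    VBoxManage guestproperty set "$VM" /MyData/VMname "$VM"

    # Inside a Linux guest - keep just the value from VBoxControl's output:
    VBoxControl guestproperty get /MyData/VMname | sed -n 's/^Value: //p'

    # Inside a Windows guest (as in the post):
    # "C:\Program Files\Oracle\VirtualBox Guest Additions\VBoxControl.exe" guestproperty get /MyData/VMname
    ```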

    Read the article

  • How to save data from model without any association in cakephp [on hold]

    - by Abhishek
    I have a base model in which I use a dataManipulation method for updates in my code. I want to save data to the Receipt and ReceiptLine models and also to OpeningBankStatement, but I have only created associations for Receipt and ReceiptLine, not for OpeningBankStatement. So I want to save data to the OpeningBankStatement model without any association. My demo data is: Array ( [Receipt] => Array ( [ID] => 566 [ObjectType] => 84 [TXNName] => bbnm [TXNDate] => 03-06-2014 [BranchID] => 1 [Narration1] => 267 [Narration] => Cheque Received [ExecutiveID] => 805 [AccountType] => 104 [Account] => 68 [ReferenceNo] => [TXNCurrencyID] => 3 [ExchangeRate] => 1.00000 [ManualAdiustment] => 0 [RevisionNumber] => 1 [CompanyID] => 1 [Status] => 633 ) [ReceiptLine] => Array ( [0] => Array ( [TXNID] => 566 [LineNo] => 0 [LineType_072] => 429 [BranchID] => 1 [AccountID] => 68 [ContactID] => [Amount] => 0 [CancelAmount] => 0 [OpenAmount] => 0 [Narration] => Cheque Received [CreatedBy] => 229 [ModifiedBy] => 229 [CreatedDate] => 2014-06-03 00:00:00 [ModifiedDate] => 2014-06-03 00:00:00 [Status] => 1 [RevisionNumber] => 1 [RowState] => [tmpInstrumentDate] => ) [1] => Array ( [LineNo] => 0 [RowState] => 436 [TXNID] => 0 [BranchID] => 1 [ContactID] => [AccountID] => 68 [Narration] => Cheque Received [Amount] => 0 [RevisionNumber] => 1 [LineType_072] => 460 [CancelAmount] => 0 [OpenAmount] => 0 [Status] => 1 ) ) [OpeningBankStatement] => Array ( [ObjectType] => 131 [TXNSeries] => 1 [TXNNo] => 12345 [TXNName] => bbnm [TXNDate] => 03-06-2014 [CompanyID] => 1 [AccountID] => 68 [ExecutiveID] => 805 [Narration] => Cheque Received [ReferenceNo] => [ParentObjectType] => 84 [ParentTXNID] => 1 [CancelledBy] => 1 [CancelledDate] => 2014-02-02 [CancellationRemarks] => hfg [Status] => 1 [RevisionNumber] => 1 ) ) Can this be solved with a dynamic model association or a callback method? Please suggest a solution.

    Read the article

  • Using data input from pop-up page to current with partial refresh

    - by dpDesignz
    I'm building a product editor webpage using visual C#. I've got an image uploader popping up using fancybox, and I need to get the info from my fancybox once submitted to go back to the first page without clearing any info. I know I need to use ajax but how would I do it? <%@ Page Language="C#" AutoEventWireup="true" CodeFile="uploader.aspx.cs" Inherits="uploader" %> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head id="Head1" runat="server"> <title></title> </head> <body style="width:350px; height:70px;"> <form id="form1" runat="server"> <asp:ScriptManager ID="ScriptManager1" runat="server"> </asp:ScriptManager> <div> <div style="width:312px; height:20px; background-color:Gray; color:White; padding-left:8px; margin-bottom:4px; text-transform:uppercase; font-weight:bold;">Uploader</div> <asp:FileUpload id="fileUp" runat="server" /> <asp:Button runat="server" id="UploadButton" text="Upload" onclick="UploadButton_Click" /> <br /><asp:Label ID="txtFile" runat="server"></asp:Label> <div style="width:312px; height:15px; background-color:#CCCCCC; color:#4d4d4d; padding-right:8px; margin-top:4px; text-align:right; font-size:x-small;">Click upload to insert your image into your product</div> </div> </form> </body> </html> CS so far using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.Configuration; // Add to page using System.Web.UI; using System.Web.UI.WebControls; using System.Data; // Add to the page using System.Data.SqlClient; // Add to the page using System.Text; // Add to Page public partial class uploader : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { } protected void UploadButton_Click(object sender, EventArgs e) { if (fileUp.HasFile) try { fileUp.SaveAs("\\\\london\\users\\DP006\\Websites\\images\\" + fileUp.FileName); string imagePath = fileUp.PostedFile.FileName; } catch (Exception ex) { txtFile.Text = "ERROR: " + ex.Message.ToString(); } finally { } else { txtFile.Text = "You have not specified a file."; } } }

    Read the article

  • How Data Guard resolves a Redo Gap

    - by JaneZhang
    In Oracle Data Guard, a redo gap means the standby database has missed a range of redo, usually because redo transport was interrupted, and the missing archived logs have to be fetched and applied later. The processes involved are: ARC: the archiver process; MRP: the Media Recovery Process, which applies redo on the standby; RFS: the Remote File Server process, which receives redo on the standby; FAL: Fetch Archive Log. The purpose of this test is to deliberately create a gap and watch how it is resolved. Environment: Oracle 11.2.0.2 on Linux 5.

    1. Check the current maximum log sequence on both sides:
        Primary:  MAX(SEQUENCE#) = 86
        Standby:  MAX(SEQUENCE#) = 86

    2. Break the network between primary and standby to create a gap (here by taking the network interface down), then switch logfiles on the primary several times:
        #ifconfig eth0 down
        SQL>alter system switch logfile;
        SQL>alter system switch logfile;
        ...
        Primary: MAX(SEQUENCE#) = 96
    The primary alert log shows that redo can no longer be shipped to the standby:
        TNS-00513: Destination host unreachable
           nt secondary err code: 101
           nt OS err code: 0
        Error 12543 received logging on to the standby
        FAL[server, ARCp]: Error 12543 creating remote archivelog file 'STANDBY'
        FAL[server, ARCp]: FAL archive failed, see trace file.
        ARCH: FAL archive failed. Archiver continuing
        ORACLE Instance orcl - Archival Error. Archiver continuing.

    3. Move the existing archived logs out of the archive directory, so the gap cannot simply be filled from what is already on disk:
        mv *.arc ../

    4. Bring the network back up:
        #ifconfig eth0 up

    5. The ARC processes now ship the current logs, and the MRP on the standby detects the gap and starts fetching it. From the standby alert log:
        Thu Mar 29 19:58:49 2012
        Media Recovery Waiting for thread 1 sequence 87 (in transit) <==== waiting, starting at sequence 87
        ...
        Thu Mar 29 20:08:45 2012
        ...
        Media Recovery Waiting for thread 1 sequence 94
        Thu Mar 29 20:11:01 2012
        RFS[61]: Assigned to RFS process 13643
        RFS[61]: Opened log for thread 1 sequence 97 dbid 1285401128 branch 757620395
        Archived Log entry 80 added for thread 1 sequence 97 rlc 757620395 ID 0x4c9d8928 dest 2:
        Thu Mar 29 20:11:02 2012
        RFS[62]: Assigned to RFS process 13645
        RFS[62]: Selected log 4 for thread 1 sequence 98 dbid 1285401128 branch 757620395
        Thu Mar 29 20:11:02 2012
        Primary database is in MAXIMUM PERFORMANCE mode
        Re-archiving standby log 4 thread 1 sequence 98
        Thu Mar 29 20:11:02 2012
        Archived Log entry 81 added for thread 1 sequence 98 ID 0x4c9d8928 dest 1:
        RFS[63]: Assigned to RFS process 13647
        RFS[63]: Selected log 4 for thread 1 sequence 99 dbid 1285401128 branch 757620395
        Thu Mar 29 20:11:05 2012
        Fetching gap sequence in thread 1, gap sequence 94-96 <=========== the gap

    6. The MRP trace shows the same gap being fetched:
        *** 2012-03-29 20:08:45.375 4265 krsh.c
        Media Recovery Waiting for thread 1 sequence 94
        *** 2012-03-29 20:11:05.543
        *** 2012-03-29 20:11:05.543 4265 krsh.c
        Fetching gap sequence in thread 1, gap sequence 94-96 <========== MRP fetching the gap
        Redo shipping client performing standby login
        *** 2012-03-29 20:11:05.593 4595 krsu.c
        Logged on to standby successfully
        Client logon and security negotiation successful!

    7. Once the missing logs arrive, the RFS processes write them and the MRP applies them:
        Thu Mar 29 20:12:06 2012
        RFS[64]: Assigned to RFS process 13649
        RFS[64]: Opened log for thread 1 sequence 94 dbid 1285401128 branch 757620395
        Archived Log entry 82 added for thread 1 sequence 94 rlc 757620395 ID 0x4c9d8928 dest 2:
        Thu Mar 29 20:12:06 2012
        RFS[65]: Assigned to RFS process 13651
        RFS[65]: Opened log for thread 1 sequence 95 dbid 1285401128 branch 757620395
        Thu Mar 29 20:12:06 2012
        RFS[66]: Assigned to RFS process 13653
        RFS[66]: Opened log for thread 1 sequence 96 dbid 1285401128 branch 757620395
        Archived Log entry 83 added for thread 1 sequence 95 rlc 757620395 ID 0x4c9d8928 dest 2:
        Archived Log entry 84 added for thread 1 sequence 96 rlc 757620395 ID 0x4c9d8928 dest 2:
        Thu Mar 29 20:12:16 2012
        Media Recovery Log /home/oracle/arch1/standby/1_94_757620395.arc
        Media Recovery Log /home/oracle/arch1/standby/1_95_757620395.arc
        Media Recovery Log /home/oracle/arch1/standby/1_96_757620395.arc
        Media Recovery Log /home/oracle/arch1/standby/1_97_757620395.arc
        Media Recovery Log /home/oracle/arch1/standby/1_98_757620395.arc

    Conclusion: in this test the gap was not closed by the primary's archiver re-shipping the missing logs on its own; it was the MRP on the standby that noticed the gap while applying redo and fetched sequences 94-96 through the FAL mechanism. Note: in 11g, gap detection and resolution is largely automatic, handled by the ARC processes on the primary together with the RFS and MRP processes on the standby.

    8. If the MRP cannot fetch the gap through FAL (for example because it cannot log on to the primary), the gap is not resolved and the MRP trace shows FAL[client, MRP0] errors such as:
        *** 2012-03-29 21:18:15.964 4265 krsh.c
        Error 1031 received logging on to the standby
        *** 2012-03-29 21:18:15.964 4265 krsh.c
        FAL[client, MRP0]: Error 1031 connecting to PRIMARY for fetching gap sequence
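
    A small sketch, not from the article, of how a DBA would check for and manually clear a gap on the standby; the path and the 94-96 range are illustrative only.

    ```bash
    #!/bin/bash
    # Sketch: check whether the standby currently reports a redo gap, and
    # (commented out) register archived logs copied across by hand so the MRP
    # can apply them. Paths and sequence numbers are examples.

    sqlplus -s / as sysdba <<'EOF'
    -- Any rows here mean the standby is missing this range of sequences
    SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;

    -- If the missing archives were copied over manually (e.g. with scp):
    -- ALTER DATABASE REGISTER LOGFILE '/home/oracle/arch1/standby/1_94_757620395.arc';
    EOF
    ```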

    Read the article

  • extreme slowness with a remote database in Drupal

    - by ceejayoz
    We're attempting to scale our Drupal installations up and have decided on some dedicated MySQL boxes. Unfortunately, we're running into extreme slowness when we attempt to use the remote DB - page load times go from ~200 milliseconds to 5-10 seconds. Latency between the servers is minimal - a tenth or two of a millisecond.
        PING 10.37.66.175 (10.37.66.175) 56(84) bytes of data.
        64 bytes from 10.37.66.175: icmp_seq=1 ttl=64 time=0.145 ms
        64 bytes from 10.37.66.175: icmp_seq=2 ttl=64 time=0.157 ms
        64 bytes from 10.37.66.175: icmp_seq=3 ttl=64 time=0.157 ms
        64 bytes from 10.37.66.175: icmp_seq=4 ttl=64 time=0.144 ms
        64 bytes from 10.37.66.175: icmp_seq=5 ttl=64 time=0.121 ms
        64 bytes from 10.37.66.175: icmp_seq=6 ttl=64 time=0.122 ms
        64 bytes from 10.37.66.175: icmp_seq=7 ttl=64 time=0.163 ms
        64 bytes from 10.37.66.175: icmp_seq=8 ttl=64 time=0.115 ms
        64 bytes from 10.37.66.175: icmp_seq=9 ttl=64 time=0.484 ms
        64 bytes from 10.37.66.175: icmp_seq=10 ttl=64 time=0.156 ms
        --- 10.37.66.175 ping statistics ---
        10 packets transmitted, 10 received, 0% packet loss, time 8998ms
        rtt min/avg/max/mdev = 0.115/0.176/0.484/0.104 ms
    Drupal's devel.module timers show the database queries aren't running any slower on the remote DB - about 150 microseconds whether it's the local or the remote server. Profiling with XHProf shows PHP execution times that aren't out of whack, either. Number of queries doesn't seem to make a difference - we see the same 5-10 second delay whether a page has 12 queries or 250. Any suggestions about where I should start troubleshooting here? I'm quite confused.
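
    One rough way to narrow this down is to compare per-connection cost against per-query cost; a big difference points at connection overhead (a classic culprit is reverse DNS lookups on the MySQL box, fixed with skip-name-resolve) rather than query execution. The host, user and credential handling below are assumptions (credentials taken from ~/.my.cnf).

    ```bash
    #!/bin/bash
    # Sketch: time N fresh connections vs. N queries over one connection to the
    # remote MySQL server. Host and user are placeholders.

    HOST=10.37.66.175
    N=100

    echo "== $N separate connections =="
    time for i in $(seq $N); do
        mysql -h "$HOST" -u drupal -e 'SELECT 1' >/dev/null
    done

    echo "== $N queries over one connection =="
    time seq $N | sed 's/.*/SELECT 1;/' | mysql -h "$HOST" -u drupal >/dev/null
    ```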

    Read the article

  • Restarting Haproxy Gracefully

    - by Anand Gupta
    As per various blogs, HAProxy can be gracefully restarted using the following command:
        sudo haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf $(cat /var/run/haproxy.pid)
    To verify this, I set up an Apache Bench script which continuously sent requests to HAProxy. Ideally, whenever I restarted the proxy, the script should not be affected. But it seems that whenever HAProxy is restarted, the Apache Bench run terminates and the connection to the load balancer is lost. Here are the details of my HAProxy configuration file:
        global
            nbproc 4
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            #log loghost local0 info
            maxconn 4096
            #chroot /usr/share/haproxy
            user haproxy
            group haproxy
            daemon
            pidfile /var/run/haproxy.pid
            stats socket /home/ubuntu/haproxy.sock
            #debug
            #quiet
        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            option redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000
        listen webstats
            bind 0.0.0.0:1000
            stats enable
            mode http
            stats uri /lb?stats
            stats auth anand:aaaaaaaa
            #stats refresh
        listen web-farm 0.0.0.0:80
            mode http
            balance roundrobin
            option httpchk HEAD /index.php HTTP/1.0
            server server2.com 1.1.1.1:80
            server serve1.com 1.1.1.2:80
    Please let me know what I am missing here.
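
    A minimal sketch of the reload plus a check that the old process is still draining; paths match the question. Note that -sf only stops the old process from taking new connections once the new one has bound, and there can still be a brief window during the handover where fresh connections are refused, which is typically what makes an ab run abort.

    ```bash
    #!/bin/bash
    # Sketch: graceful HAProxy reload, then confirm the previous process is
    # lingering to finish its in-flight sessions.

    OLD_PID=$(cat /var/run/haproxy.pid)

    haproxy -f /etc/haproxy/haproxy.cfg -p /var/run/haproxy.pid -sf "$OLD_PID"

    # The old PID should still be visible until its existing sessions finish:
    ps -p "$OLD_PID" -o pid,etime,cmd
    ```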

    Read the article

  • Citrix Performance monitoring

    - by Dr I
    Hi people, I had a strange thing happen on my Citrix farm today. My users are equipped with Axel Model 80F thin clients, and today one of them had a problem. He opened a Citrix published desktop session (hosted by a farm of Windows 2003 R2 SP2 servers), loaded Lotus Notes and opened a mail that contained a PDF attachment. Once he opened the PDF file, his session froze. We rebooted the thin client and logged back into the session (which had not been closed during the process). Once logged in again, we tried to read the PDF and once again, after half a page, the session froze (I could see the mouse moving on the screen but could not do anything else). Then I closed the session, rebooted the thin client properly, and, tada, with the same steps everything was fine and we didn't see any freeze. Now my question is: do you think this bug comes from the thin client or from the server? I've checked my farm and I don't have any alerts in the Citrix monitoring console logs. My feeling is that it's the thin client, BUT I don't have enough monitoring tools to be sure of that. So do you have some good monitoring tools or methods? My config: Windows 2003 R2 SP2, Citrix XenApp 5.0

    Read the article

  • Websphere - Performance Monitoring Infrastructure servlet login GET

    - by virtual-lab
    I am trying to make an HTTP call to the WebSphere PMI servlet. WebSphere has security enabled, so I am asked to enter user credentials in order to display the XML. What doesn't work as I expect is that the username and password in the URL are not recognized and the BASIC authorization form is displayed instead. Obviously that doesn't work from a third-party application's point of view; I need to pass those credentials along with the GET request. Any suggestion?
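
    A minimal sketch of fetching the PMI XML with the credentials supplied as an HTTP Basic Authorization header rather than embedded in the URL. Host, port and credentials are placeholders; the servlet path shown is the usual default for the WebSphere PerfServlet.

    ```bash
    #!/bin/bash
    # Sketch: pull the PMI XML with HTTP Basic auth instead of user:pass@host.
    curl -s -u wasadmin:secret \
         -o pmi.xml \
         "http://was-host:9080/wasPerfTool/servlet/perfservlet"

    # If the server redirects to HTTPS or uses a self-signed certificate:
    # curl -s -L -k -u wasadmin:secret "https://was-host:9443/wasPerfTool/servlet/perfservlet"
    ```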

    Read the article

  • PHP Configuration file won’t load IIS7 x64

    - by Martin Murphy
    using Fast CGI I can't get it to read the php.ini file. See my phpinfo below. System Windows NT WIN-PAFTBLXQWYW 6.0 build 6001 Build Date Mar 5 2009 19:43:24 Configure Command cscript /nologo configure.js "--enable-snapshot-build" "--enable-debug-pack" "--with-snapshot-template=d:\php-sdk\snap_5_2\vc6\x86\template" "--with-php-build=d:\php-sdk\snap_5_2\vc6\x86\php_build" "--disable-zts" "--disable-isapi" "--disable-nsapi" "--with-pdo-oci=D:\php-sdk\oracle\instantclient10\sdk,shared" "--with-oci8=D:\php-sdk\oracle\instantclient10\sdk,shared" "--enable-htscanner=shared" Server API CGI/FastCGI Virtual Directory Support disabled Configuration File (php.ini) Path C:\Windows Loaded Configuration File (none) Scan this dir for additional .ini files (none) My php.ini file is residing in both my c:\php and my c:\windows I've made sure it has read permissions in both places from network service. I've added various registry settings, Environment Variables and followed multiple tutorials found on the web and no dice as of yet. Rebooted after each. My original install was using MS "Web Platform Installer" however I have since rebooted. Any ideas on where to go from here would be most welcome.

    Read the article

  • High CPU from httpd process

    - by KHWeb
    I am currently getting high CPU on a server that is just running a couple of sites with very low traffic. One of the sites is in still development going live soon. However, this site is very very slow...When browsing through its pages I can see that the CPU goes from 30% to 100% for httpd (see top output below). I have tuned httpd & MySQL, Apache Solr, Tomcat for high performance, and I am using APC. Not sure what to do from here or how to find the culprit as I have a bunch of messages on the httpd log and have been chasing dead ends for some time...any help is greatly appreciated. Server: AuthenticAMD, Quad-Core AMD Opteron(tm) Processor 2352, RAM 16GB Linux 2.6.27 64-bit, Centos 5.5 Plesk 9.5.4, MySQL 5.1.48, PHP 5.2.17 Apache/2.2.3 (CentOS) DAV/2 mod_jk/1.2.15 mod_ssl/2.2.3 OpenSSL/0.9.8e-fips-rhel5 PHP/5.2.17 mod_perl/2.0.4 Perl/v5.8.8 Tomcat6-6.0.29-1.jpp5, Tomcat-native-1.1.20-1.el5, Apache Solr top 17595 apache 20 0 1825m 507m 10m R 100.4 3.2 0:17.50 httpd 17596 apache 20 0 1565m 247m 9936 R 83.1 1.5 0:10.86 httpd 17598 apache 20 0 1430m 110m 6472 S 54.5 0.7 0:08.66 httpd 17599 apache 20 0 1438m 124m 12m S 37.2 0.8 0:11.20 httpd 16197 mysql 20 0 13.0g 2.0g 5440 S 9.6 12.6 297:12.79 mysqld 17617 root 20 0 12748 1172 812 R 0.7 0.0 0:00.88 top 8169 tomcat 20 0 4613m 268m 6056 S 0.3 1.7 6:40.56 java httpd error_log [debug] prefork.c(991): AcceptMutex: sysvsem (default: sysvsem) [info] mod_fcgid: Process manager 17593 started [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 17594 for worker proxy:reverse [debug] proxy_util.c(1967): proxy: initialized single connection worker 0 in child 17594 for (*) [debug] proxy_util.c(1854): proxy: grabbed scoreboard slot 0 in child 17595 for worker proxy:reverse [debug] proxy_util.c(1873): proxy: worker proxy:reverse already initialized [notice] child pid 22782 exit signal Segmentation fault (11) [error] (43)Identifier removed: apr_global_mutex_lock(jk_log_lock) failed [debug] util_ldap.c(2021): LDAP merging Shared Cache conf: shm=0x7fd29a5478c0 rmm=0x7fd29a547918 for VHOST: example.com [info] APR LDAP: Built with OpenLDAP LDAP SDK [info] LDAP: SSL support available [info] Init: Seeding PRNG with 256 bytes of entropy [info] Init: Generating temporary RSA private keys (512/1024 bits) [info] Init: Generating temporary DH parameters (512/1024 bits) [debug] ssl_scache_shmcb.c(374): shmcb_init allocated 512000 bytes of shared memory [debug] ssl_scache_shmcb.c(554): entered shmcb_init_memory() [debug] ssl_scache_shmcb.c(576): for 512000 bytes, recommending 4265 indexes [debug] ssl_scache_shmcb.c(619): shmcb_init_memory choices follow [debug] ssl_scache_shmcb.c(621): division_mask = 0x1F [debug] ssl_scache_shmcb.c(623): division_offset = 96 [debug] ssl_scache_shmcb.c(625): division_size = 15997 [debug] ssl_scache_shmcb.c(627): queue_size = 2136 [debug] ssl_scache_shmcb.c(629): index_num = 133 [debug] ssl_scache_shmcb.c(631): index_offset = 8 [debug] ssl_scache_shmcb.c(633): index_size = 16 [debug] ssl_scache_shmcb.c(635): cache_data_offset = 8 [debug] ssl_scache_shmcb.c(637): cache_data_size = 13853 [debug] ssl_scache_shmcb.c(650): leaving shmcb_init_memory()

    Read the article

  • Process runs slower as a scheduled task than it does interactively

    - by Charlie
    I have a scheduled task which is very CPU- and IO-intensive, and takes about four hours to run (building source code, if you're curious). The task is a Powershell script which spawns various sub-processes to do its work. When I run the same process interactively from a Powershell prompt, as the same user account, it runs in about two and a half hours. The task is running on Windows Server 2008 R2. What I want to know is why it takes so much longer to run as a scheduled task - more than an hour longer. One thing I noticed is that the task scheduler runs at Below-Normal priority, so when my task starts, it inherits the same lowered priority. However, I've updated the script to set the Powershell process priority back to Normal, and it still takes just as long. Anybody have an idea what could be different between the two scenarios? I've ruled out differences in processor and IO load - this task is the only thing the system is used for, so there's nothing else running that could be competing for resources.

    Read the article

  • getting bash to load my PATH over SSH

    - by Eli Bendersky
    This problem comes up with me trying to make svnserve (Subversion server) available on a server through SSH. I compiled SVN and installed it in $HOME/bin. Local access to it (not through SSH) works fine. Connections to svn+ssh fail due to: bash: svnserve: command not found Debugging this, I've found that: ssh user@server "which svnserve" says: which: no svnserve in (/usr/bin:/bin) This is strange, because I've updated the path to $HOME/bin in my .bashrc, and also added it in ~/.ssh/environment. However, it seems like the SSH doesn't read it. Although when I run: ssh user@server "echo $PATH" It does print my updated path! What's going on here? How can I make SSH find my svnserve? Thanks in advance
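
    A sketch of the usual checks for this situation. One detail worth noting: with double quotes, $PATH is expanded by the local shell before ssh runs, so `ssh user@server "echo $PATH"` prints the updated local path, not the remote one.

    ```bash
    #!/bin/bash
    # Sketch of the usual checks (paths are examples):

    # 1) Single quotes, so the *remote* shell expands $PATH:
    ssh user@server 'echo $PATH'
    ssh user@server 'which svnserve'

    # 2) ~/.ssh/environment is ignored unless the server allows it:
    #    in /etc/ssh/sshd_config set
    #        PermitUserEnvironment yes
    #    and restart sshd.

    # 3) Non-interactive shells may skip ~/.bashrc or bail out at the
    #    "not interactive" guard near the top of it, so put the PATH export
    #    before that guard.
    ```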

    Read the article

  • Poor home office network performance and cannot figure out where the issue is

    - by Jeff Willener
    This is the most bizarre issue. I have worked with small to mid size networks for quite a long time and can say I'm comfortable connecting hardware. Where you will start to lose me is with managed switches and firewalls. To start, let me describe my network (sigh, shouldn't but I MUST solve this). 1) Comcast Cable Internet 2) Motorola SURFboard eXtreme Cable Modem. a) Model: SB6120 b) DOCSIS 3.0 and 2.0 support c) IPv4 and IPv6 support 3-A) Cisco Small Business RV220W Wireless N Firewall a) Latest firmware b) Model: RV220W-A-K9-NA c) WAN Port to Modem (2) d) vlan 1: work e) vlan 2: everything else. 3-B) D-Link DIR-615 Draft 802.11 N Wireless Router a) Latest firmware b) WAN Port to Modem (2) 4) Servers connected directly to firewall a) If firewall 3-A, then vlan 1 b) CAT5e patch cables c) Dell PowerEdge 1400SC w/ 10/100 integrated NIC (Domain Controller, DNS, former DHCP) d) Dell PowerEdge 400SC w/ 10/100/1000 integrated NIC (VMWare Server) 4) Linksys EZXS88W unmanaged Workgroup 10/100 Switch a) If firewall 3-A, then vlan 2 b) 25' CAT5e patch cable to firewall (3-A or 3-B) c) Connects xBox 360, Blu-Ray player, PC at TV 5) Office equipment connected directly to firewall a) If firewall 3-A, then vlan 1 b) ~80' CAT6 or CAT5e patch cable to firewall (3-A or 3-B) c) Connects 1) Dell Latitude laptop 10/100/1000 2) Dell Inspiron laptop 10/100 3) Dell Workstation 10/100/1000 (Pristine host, VMWare Workstation 7.x with many bridged VM's) 4) Brother Laser Printer 10/100 5) Epson All-In-One Workforce 310 10/100 5-A) NetGear FS116 unmanaged 10/100 switch a) I've had this switch for a long time and never had issues. 5-B) NetGear GS108 unmanaged 10/100/1000 switch a) Bought new for this issue and returned. 5-C) Linksys SE2500 unmanaged 10/100/1000 switch a) Bought new for this issue and returned. 5-D) TP-Link TL-SG10008D unmanaged 10/100/1000 a) Bought new for this issue and still have. 6) VLan 1 Wireless Connections (on same subnet if 3-B) a) Any of those at 5c b) HP Laptop 7) VLan 2 Wireless Connection (on same subnet if 3-B) a) IPad, IPod b) Compaq Laptop c) Epson Wireless Printer Shew, without hosting a diagram I hope that paints a good picture. The Issue The breakdown here is at item 5. No matter what I do I cannot have a switch at 5 and have to run everything wireless regardless of router. Issues related to using a switch (point 5 above) SpeedTest is good. Poor throughput to other devices if can communicate at all. Usually cannot ping other devices even on the same switch although, when able, ping times are good. Eventual lose of connectivity and can "sometimes" be restored by unplugging everything for several days, not minutes or hours but we're talking a week if at all. Directly connect to computer gives good internet connection however throughput to other devices connected to firewall is at best horrible. Yet printing doesn't seem to be an issue as long as they are connected via wireless. I have to force the RV220W to 1000Mb on the respective port if using a Gig Switch Issues related to using wireless in place of a switch (point 5 above) Poor throughput to other devices if can communicate. SpeedTest is good. Bottom line Internet speeds are awesome. By the way, Comcast went WAY above and beyond to make sure it was not them. They rewired EVERYTHING which did solve internet drops. Computer to computer connections are garbage Cannot get switch at 5 to work, yet other at 4 has never had an issue. Direct connection, bypass switch, is good for DHCP and internet. 
DNS must be on server, not firewall. Cisco insists its my switches but as you can see I have used four and two different cables with the same result. My gut feeling is something is happening with routing. But I'm not smart enough to know that answer. I run a lot of VM's at 5-c-3, could that cause it? What's different compared to my previous house is I have introduced Gigabit hardware (firewall/switches/computers). Some of my computers might have IPv6 turned on if I haven't turned it off already. I'm truly at a loss and hope anyone has some crazy idea how to solve this. Bottom line, I need a switch in my office behind the firewall. I've changed everything. The real crux is I will find a working solution and, again, after days it will stop working. So this means I cannot isolate if its a computer since I have to use them. Oh and a solution is not throwing more money at this. I'm well into $1k already. Yah, lame.
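
    One way to isolate where the throughput collapses is to test a single hop at a time with iperf, first with two machines on the suspect switch and then with both plugged straight into the RV220W, and compare. Hostnames, IPs and interface names below are placeholders, and the error-counter checks are Linux-side commands; on Windows, check the adapter's speed/duplex status instead.

    ```bash
    #!/bin/bash
    # Sketch: point-to-point throughput test plus NIC error counters.

    # On machine A (e.g. the Dell workstation):
    iperf3 -s

    # On machine B (e.g. a laptop on the same switch):
    iperf3 -c 192.168.1.50 -t 30          # raw TCP throughput, 30 seconds
    iperf3 -c 192.168.1.50 -t 30 -R       # reverse direction

    # While the test runs, watch for link errors on both ends:
    ethtool eth0 | grep -iE 'speed|duplex'
    ip -s link show eth0
    ```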

    Read the article

  • IPCop server slows down download speed

    - by noocyte
    I have an IPCop server running at home, been doing just fine for ~5 months, but last week I suddenly started getting time-outs and slow downloads from the 'net. I first thought that this was my ISP acting up, then I thought it might be one of my 3 switches or some of my cabling. In due order I've tested everything above and found them all to be working as they should. The only factor remaining is my IPCop server. Facts: I've got a 15/15 Mbit line (fiber) and I get ~15 Mbit upload, but only 0.5 Mbit download with the IPCop box as router (ISP router set in bridge mode). If I connect without the IPCop box (using the ISP router) I get ~12 Mbit upload and ~15 Mbit download. The load on the IPCop box appears to be light and it used to handle this traffic just fine 2 weeks ago. The memory usage is ~60%, I tried to restart it and test again, the memory fell to ~50% then (5 months of uptime). I'm thinking that one of my nics are busted, but I'm sort of perplexed that this could be the outcome; slow download but full speed upload. Anybody ever seen that happening before? Could it just be one of the nics that needs to be replaced? Will try that as soon as I can get my hands on a couple of new ones.
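
    An asymmetric result like this (full speed one way, a trickle the other) is often a speed/duplex mismatch or a failing NIC/cable rather than load, so one rough check is to look at the negotiated link settings and error counters on the IPCop box. Interface names are examples (IPCop's RED/GREEN NICs may map to eth0/eth1), and whether ethtool or mii-tool is available depends on the install.

    ```bash
    #!/bin/bash
    # Sketch: check link negotiation and error counters on a suspect NIC.

    ethtool eth0                 # or: mii-tool -v eth0
    ifconfig eth0 | grep -iE 'errors|dropped|overruns'

    # Force a sane setting on a suspect port (match it on the switch/ISP side!):
    ethtool -s eth0 speed 100 duplex full autoneg off
    ```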

    Read the article

  • nginx can't load images,css,js

    - by EquinoX
    When I point to a URL in nginx where it has images extension such as: http://50.56.81.42/phpMyAdmin/themes/original/img/logo_right.png (as example) it gives me the 404 error as it can't find the file, but the file is actually there. What is potentially wrong? UPDATE: Here's the error log that I was able to pull out: 2011/02/27 05:53:29 [error] 18679#0: *225 open() "/usr/local/nginx/html/phpMyAdmin/js/mooRainbow/mooRainbow.css" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/js/mooRainbow/mooRainbow.css HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php" 2011/02/27 05:53:29 [error] 18679#0: *226 open() "/usr/local/nginx/html/phpMyAdmin/print.css" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/print.css HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php" 2011/02/27 05:53:29 [error] 18679#0: *228 open() "/usr/local/nginx/html/phpMyAdmin/themes/original/img/logo_right.png" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/themes/original/img/logo_right.png HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php" 2011/02/27 05:53:29 [error] 18679#0: *223 open() "/usr/local/nginx/html/phpMyAdmin/themes/original/img/b_help.png" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/themes/original/img/b_help.png HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php" 2011/02/27 05:53:29 [error] 18679#0: *227 open() "/usr/local/nginx/html/phpMyAdmin/themes/original/img/s_warn.png" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/themes/original/img/s_warn.png HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php" 2011/02/27 05:53:29 [error] 18679#0: *227 open() "/usr/local/nginx/html/phpMyAdmin/favicon.ico" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/favicon.ico HTTP/1.1", host: "50.56.81.42" 2011/02/27 05:54:39 [error] 18679#0: *237 open() "/usr/local/nginx/html/phpMyAdmin/print.css" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/print.css HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php" 2011/02/27 05:54:39 [error] 18679#0: *235 open() "/usr/local/nginx/html/phpMyAdmin/js/mooRainbow/mooRainbow.css" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/js/mooRainbow/mooRainbow.css HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php" 2011/02/27 05:54:39 [error] 18679#0: *238 open() "/usr/local/nginx/html/phpMyAdmin/themes/original/img/logo_right.png" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/themes/original/img/logo_right.png HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php" 2011/02/27 05:54:39 [error] 18679#0: *239 open() "/usr/local/nginx/html/phpMyAdmin/themes/original/img/b_help.png" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/themes/original/img/b_help.png HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php" 2011/02/27 05:54:39 [error] 18679#0: 
*233 open() "/usr/local/nginx/html/phpMyAdmin/themes/original/img/s_warn.png" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/themes/original/img/s_warn.png HTTP/1.1", host: "50.56.81.42", referrer: "http://50.56.81.42/phpMyAdmin/main.php" 2011/02/27 05:54:39 [error] 18679#0: *233 open() "/usr/local/nginx/html/phpMyAdmin/favicon.ico" failed (2: No such file or directory), client: 70.176.18.156, server: localhost, request: "GET /phpMyAdmin/favicon.ico HTTP/1.1", host: "50.56.81.42" Here's my nginx.conf file, in case I am missing something: #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { root html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } location ~ \.(js|css|png|jpg|jpeg|gif|ico|html)$ { expires max; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { root /usr/share/nginx/html; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/nginx/html$fastcgi_script_name; include fastcgi_params; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one location ~ /\.ht { deny all; } } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} } What does this mean? It can't pull out the .css, etc....
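
    Reading the error log and the config together: nginx is looking for the static files under /usr/local/nginx/html (the compiled-in default used by "root html;"), while the PHP location block points at /usr/share/nginx/html, so the regex location for js/css/png is simply using the wrong document root. A rough sketch of how to confirm that before moving "root /usr/share/nginx/html;" up to the server{} level; the config path is an assumption based on the /usr/local/nginx prefix.

    ```bash
    #!/bin/bash
    # Sketch: confirm the document-root mismatch, then fix and reload.

    # Does the file exist where the PHP root says it should?
    ls -l /usr/share/nginx/html/phpMyAdmin/themes/original/img/logo_right.png

    # ...and where nginx is actually looking, per the error log?
    ls -l /usr/local/nginx/html/phpMyAdmin/themes/original/img/logo_right.png

    # Check every root directive, then test and reload the config:
    grep -n "root" /usr/local/nginx/conf/nginx.conf
    nginx -t && nginx -s reload
    ```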

    Read the article

  • coordinating a script to run on only one of identical load-balanced servers

    - by Amos Shapira
    I have two identically configured CentOS 5 servers (possibly more in the future). I need to run a cron job on any one of them and that it'll run only on one of them. I know about RedHat Cluster Suite (we use it on other servers), but it's a too big a gun to use for this task, plus it doesn't really behave well for less than three nodes. Is there anything light-weight I can use for that? The servers can communicate with each other directly. I suppose I can develope something over ssh or nrpe (two server which are already installed on these servers), but I was wondering whether there is something already available.
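
    A lightweight sketch of the ssh-based approach: both servers keep the same crontab entry, but only the first host in an agreed list that is currently alive actually runs the job. It assumes passwordless SSH keys between the boxes; hostnames and the job path are placeholders.

    ```bash
    #!/bin/bash
    # Sketch: poor-man's leader election for a cron job shared by two servers.

    HOSTS="web1.example.com web2.example.com"
    ME=$(hostname -f)

    for h in $HOSTS; do
        if [ "$h" = "$ME" ]; then
            # No earlier host in the list answered, so this host runs the job
            exec /usr/local/bin/the-real-job.sh
        fi
        # An earlier host is alive, so defer to it
        if ssh -o ConnectTimeout=5 -o BatchMode=yes "$h" true 2>/dev/null; then
            exit 0
        fi
    done
    ```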

    Read the article

  • apache performance improvements and maxclients

    - by updog
    I know this has been asked a few (thousand) times around the internet, but I was hoping someone who's in the know might be able to comment on my particular setup. I have a web server hosting one site (PHP/CodeIgniter) with a WordPress blog in a subdirectory. The server has 2GB RAM and a 3GHz CPU, and I have offloaded the static assets to CloudFlare, which has reduced bandwidth for the actual server by almost 75%. The problem I have is that when an email campaign is sent out that links to the site or blog, it slows down. Below are my settings in apache2.conf. Average Apache process size is 80M and there is 1.5GB available for Apache.
        <IfModule mpm_prefork_module>
            StartServers 8
            MinSpareServers 5
            MaxSpareServers 20
            MaxClients 20
            MaxRequestsPerChild 2000
        </IfModule>
    I have already set up and installed APC and built some caching into the site, and used W3 Total Cache on the blog. The number of concurrent users is around 2-300 when there is a campaign; are there any further optimisations before
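
    A rough sketch for sanity-checking the MaxClients figure from live numbers rather than a guess: with roughly 1.5GB left for Apache and about 80MB per child, MaxClients 20 is already near the ceiling (1500 / 80 is about 18), so raising it without shrinking per-process memory would likely push the box into swap.

    ```bash
    #!/bin/bash
    # Sketch: measure actual Apache child memory and what is really spare.

    # Average resident size of current Apache children, in MB:
    ps -o rss= -C apache2 | awk '{sum+=$1; n++} END {if (n) printf "avg %.0f MB over %d procs\n", sum/n/1024, n}'

    # Memory the rest of the system is using:
    free -m
    ```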

    Read the article

  • Poor NFS Performance: OpenFiler

    - by Safin09
    Good day everyone, I have an issue with OpenFiler, a Linux-based operating system that converts a computer into a SAN/NAS appliance. Here is the problem. In my environment we have two NetApp StoreVault 500 appliances that I normally perform backups to over an NFS share. There are two backup cron jobs that use ghettoVCB to back up two groups of VMs. One group is a pool of 3 VMs; this takes 13 mins to complete. A second job backs up a pool of 5 VMs to the 2nd StoreVault appliance and takes 2 hours. We then installed OpenFiler on an old server that has two Xeon cores. There is a software RAID 5 array in place. When performing the same backups to an NFS OpenFiler share, the first backup job, which normally takes 13 mins, takes around 4 hours. The second backup job, which normally takes 2 hours, takes almost 10 hours to complete. This is unacceptable!!!! Especially considering the strain placed on the host ESX server. I assumed that the CPU overhead of the software RAID 5 explained the long backup times. I then installed OpenFiler on a 2nd server, an IBM x306 machine which has an Intel P4 processor. This time no software RAID, or any RAID at all: a single 750GB hard drive that contains the OS, with the rest of the disk used to back up VMs to an NFS share. I performed the first backup job of the pool of 3 VMs. This time the backup job took 1 and 1/2 hours to complete instead of 13 mins!!!!!!!!!! Is OpenFiler simply poor at being an NFS server!!!!!!!!!!!!! Has anyone else had these issues with OpenFiler?
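
    A rough sketch for separating "slow disks" from "slow NFS" on the OpenFiler side; the mount points and share paths are placeholders, and the dd tests assume a Linux client rather than the ESX host itself. A "sync" export combined with a small wsize is a very common cause of exactly this kind of slowdown.

    ```bash
    #!/bin/bash
    # Sketch: compare local write speed on the OpenFiler box with the same
    # write over the NFS mount, then inspect how the export is configured.

    # Raw local write speed on the OpenFiler server itself:
    dd if=/dev/zero of=/mnt/vg0/backups/ddtest bs=1M count=2048 oflag=direct

    # The same write from an NFS client over the mount:
    dd if=/dev/zero of=/mnt/nfs-backup/ddtest bs=1M count=2048 conv=fsync

    # Export and mount options (sync/async, rsize/wsize):
    grep nfs /proc/mounts        # on the client
    exportfs -v                  # on the OpenFiler side
    ```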

    Read the article

  • High mysql server load, sar output

    - by eric
    I have a MySQL Server that should be performing better than it seems to be. We're running ubuntu on a Amazon Cluster Compute (cc1.4xlarge) Linux ip-10-0-1-60 3.2.0-25-virtual #40-Ubuntu SMP Wed May 23 22:20:17 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux Distributor ID: Ubuntu Description: Ubuntu 12.04 LTS Release: 12.04 Codename: precise I have several output files from sar that i'm not really sure how to interpret. For example, I ran: # Individual block device I/O activities sar -d 1 180 > logs/block_device_io.log & which gave me what looks like really high utilisation of my disk (turns out this block device maps to /dev/xvdh on /var/lib/mysql type ext4 (rw,_netdev) The output from my log: 10:48:59 PM DEV tps rd_sec/s wr_sec/s avgrq-sz avgqu-sz await svctm %util 10:49:00 PM dev202-16 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 10:49:00 PM dev202-32 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 10:49:00 PM dev8-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 10:49:00 PM dev202-112 1008.00 31040.00 1416.00 32.20 1.02 1.01 0.89 90.00 10:49:00 PM dev202-80 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Am I wrong in thinking this is a problem? I have it above 90% almost the entire time we're seeing slowness. Or does this just mean MySQL is doing what it's supposed to do?
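
    For what it's worth, dev202-112 (the /var/lib/mysql volume) sitting near 90% util while doing ~31,000 rd_sec/s (roughly 15 MB/s of reads) does suggest the device is saturated with reads, so a few quick follow-up checks are sketched below. Credentials are assumed to come from ~/.my.cnf.

    ```bash
    #!/bin/bash
    # Sketch: confirm the read pressure and see whether it is buffer-pool misses.

    # Live per-device latency/utilisation, easier to read than raw sar:
    iostat -dxm 5 /dev/xvdh

    # Is the InnoDB buffer pool big enough, and is it still warming up?
    mysql -e "SHOW ENGINE INNODB STATUS\G" | grep -A4 "BUFFER POOL AND MEMORY"
    mysql -e "SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read%';"

    # Which queries are doing the reading right now?
    mysql -e "SHOW FULL PROCESSLIST;"
    ```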

    Read the article

  • Clicking a link in IE6 doesn't load page (internal DNS entry on our intranet)

    - by Callum
    I have a very strange problem that is only affecting some versions of IE6. The problem does affect IE 6.0.2900.5512, but does not seem to affect 6.0.3790.3959. Basically I work for a company and we have an intranet. While I'm not an expert on "internal DNS pointers", what I was able to do was create a website (let's say about football), and when an employee who is sitting behind the company firewall types the word "football" into the address bar of their web browser, they get redirected to a particular server. I am told this is some kind of "internally pointing DNS entry". So, I've set one of these up, and I have placed a link to it on our company intranet page. However, when the link is clicked in IE 6.0.2900.5512, the page goes blank. Clicking "refresh" then loads the correct page (the one specified in the link). Can anyone help me out here? I have tried changing the way the URL is formed, everything from //football to http://football/ etc. The link works fine in every other browser and in IE7+, but unfortunately, IE6 is still the most common browser in use at my organisation.

    Read the article

  • nginx+mysql5 loadtesting configuration strangeness

    - by genseric
    I am trying to set up a new server running Debian 6 and trying to make it work smoothly under load. I've used a WordPress site as a test object and tried the configurations on http://blitz.io. When I increase the MySQL max_connections from 50 to 200, lots of timeouts start to occur, but at 50 there are no timeouts and response times are pretty good. The nginx configuration is fine; I tuned the config so I don't see errors. So I presume it's related to the other configuration options in my.cnf. I've read a bit about the options but still can't work out what the max_connections problem is all about. By the way, the server has 16GB of RAM and a decent i7 CPU. Here is the current my.cnf:
        [client]
        port = 3306
        socket = /var/run/mysqld/mysqld.sock
        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice = 0
        [mysqld]
        wait_timeout=60
        connect_timeout=10
        interactive_timeout=120
        user = mysql
        pid-file = /var/run/mysqld/mysqld.pid
        socket = /var/run/mysqld/mysqld.sock
        port = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir = /tmp
        language = /usr/share/mysql/english
        skip-external-locking
        bind-address = 127.0.0.1
        key_buffer = 384M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 20
        myisam-recover = BACKUP
        max_connections = 50
        table_cache = 1024
        thread_concurrency = 8
        query_cache_limit = 2M
        query_cache_size = 128M
        expire_logs_days = 10
        max_binlog_size = 100M
        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M
        [mysql]
        #no-auto-rehash # faster start of mysql but no tab completition
        [isamchk]
        key_buffer = 16M
    Thanks in advance. I asked this question on SO but it was closed as off topic, so I believe this is a SF question.

    Read the article

  • Windows 7 won't load unless other harddrives "disconnect"ed in UEFI shell

    - by lmz
    I have three disks, one GPT partitioned containing Windows 7 and Debian, the other MBR partitioned containing CentOS, and the other one MBR partitioned, empty. It used to work (loading Windows boot manager using rEFIt) but now after installing CentOS and OpenIndiana on the second drive, Windows won't boot. The logo is displayed briefly and then a text mode scrollbar "Loading files", then back to the rEFIt menu. The only thing that makes it work is if I drop into the UEFI shell and run disconnect XX where XX is the device handle of the other hard drives (obtained from running devices). This makes me think that the bootloader is getting confused about where the Windows partition is. Is there any information on how the Windows UEFI boot loader finds the Windows partition, or is there any logging I can turn on to help troubleshoot this issue?

    Read the article

  • load a php page with a cron job

    - by s2xi
    I am using a cron job to reload my httpd service after a subdomain is created. I have the problem that when the reload happens the page that registers the user throws a server error. I was wondering if I could go around this by having another cron task. So my logic would be: httpd reload after a .conf file is created then take the user back to the DocumentRoot of the main page. So in usage it would be: a user registers, then is automatically taken back to domain.com
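
    A rough sketch of the two pieces usually combined for this, in system crontab (/etc/crontab) format with a user field; the URLs, script paths and schedule are examples only, and after-reload.php is a hypothetical page.

    ```bash
    # 1) Reload Apache gracefully instead of a hard restart, so in-flight
    #    requests (like the registration page) finish instead of erroring out:
    */5 * * * * root /usr/sbin/apachectl graceful > /dev/null 2>&1

    # 2) "Load a PHP page" from cron by simply requesting it over HTTP:
    */5 * * * * root wget -q -O /dev/null http://domain.com/after-reload.php
    # or, without a web request, run it with the CLI interpreter:
    */5 * * * * root php /var/www/domain.com/after-reload.php > /dev/null 2>&1
    ```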

    Read the article

  • Wi-Fi performance in Windows 8 RP on a MacBook Air (mid 2011)

    - by Steven Lu
    I was able to install the Boot Camp Windows software using the executable that it provided, and there are no unrecognized or unknown devices in Device Manager. Wi-Fi works but it seems to be limited to an extremely slow 1.5Mbits. Network Center reports an 802.11n connection (at 65Mbps usually) but transfers never reach above about 200kB/s. Being limited to 1/20th of the connection speed of my internet service is quite frustrating. Does anybody experience the same issue? I have been trying to identify the Broadcom Wi-Fi chipset and a driver that I could try to upgrade to but I have made very little progress on Google on this front.

    Read the article

  • I used disk copy to clone my drive, now my windows 7 profile won't load correctly

    - by RzK
    I used EaseUS Disk Copy after Acronis, Clonezilla and Windows image restore all failed me. Basically it copies all sectors; I set it to skip bad sectors (40). The source drive works, it just gave me a couple of errors and stopped booting at one point. The new drive is an identical copy, minus the 40 bad sectors. The drive is set to C and marked as the active partition, and I rebuilt the boot order. I've run sfc /scannow and chkdsk /r; chkdsk found 20KB of bad sectors if I remember right. Now the issue I get is that when I log into my profile, which was saved correctly, I get a blank light blue wallpaper (the non-licensed background), explorer.exe is not running, and there are only 4 processes running in Task Manager, including Task Manager itself. I would try a repair install, but Ctrl-E will not open anything; nothing will open once I force-start explorer.exe, almost as if all services are down. What should I do? A fresh install is almost not a possibility; I will try to fix this issue. sfc /scannow /offbootdir=c:\ /offwindir=c:\windows returns "Windows Resource Protection could not perform the requested operation"

    Read the article
