Search Results

Search found 9963 results on 399 pages for 'special commands'.


  • Setting my NIC to full duplex

    - by David
    I am trying to optimize the network speed of my Solaris x86 server, and have discovered that the Cisco 3548 it is connected to has issues with the NIC in my server. The NIC appears not to have been fully configured and comes up at 100 half-duplex, while the 3548 ports are all set to 100 full. Ideally I'd like the server set to 100 full, and I have been attempting to configure it using ndd commands, but with no results. The following command reports:

        -bash-3.00# dladm show-dev rtls0
        rtls0    link: unknown    speed: 100 Mbps    duplex: unknown

    The NIC shows up as:

        pci bus 0x0001 cardnum 0x06 function 0x00: vendor 0x10ec device 0x8139
        Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+

    which should be configurable. I have modified the configuration file from auto config (5) to 100 fdx (4) to no avail. If there is no other choice, I could set the Cisco 3548 to 100 half-duplex, but that causes a huge performance loss: throughput is currently about 500 Kbps when it should be around 40 Mbps.
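
    A sketch of the ndd approach, assuming the rtls driver exposes the usual Solaris link parameters (it may not; the device node and parameter names here are assumptions to verify with the first command):

        # list the parameters the driver actually supports (assumed device node)
        ndd /dev/rtls0 \?
        # if the usual parameters exist: disable autonegotiation and force 100 full-duplex
        ndd -set /dev/rtls0 adv_autoneg_cap 0
        ndd -set /dev/rtls0 adv_100fdx_cap 1
        ndd -set /dev/rtls0 adv_100hdx_cap 0

    If the rtls driver does not expose these parameters, forcing speed/duplex in the driver's .conf file and rebooting (as attempted above) or swapping in a NIC whose driver supports ndd/dladm link properties are the usual fallbacks.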

    Read the article

  • RemoteApp shows no certificate available but RD Session host finds it fine

    - by Scott Chamberlain
    I am trying to set up RemoteApp for an internal domain. I have a root CA that is trusted by all of the end computers, and that cert has signed a wildcard cert I am trying to use for the server. I added the pfx of the wildcard cert to the local machine personal store. From there I can use it fine for signing the RD Session Host session. However, when I try to set up the signature for RemoteApp the certificate does not show up. What do I need to do to get my certificate to be available for use?

    UPDATE: The certificate was generated with the following commands:

        makecert -pe -n "CN=*.vw.local" -a sha1 -sky signature -ic VetWebCA.cer -iv VetWebCA.pvk -sv VetWebComputerWildcard.pvk VetWebComputerWildcard.cer
        pvk2pfx -pvk VetWebComputerWildcard.pvk -spc VetWebComputerWildcard.cer -pfx VetWebComputerWildcard.pfx

    The resulting pfx was added to the machine local store via mmc. Oddly, in PowerShell, if I add the -CodeSigningCert flag the wildcard certificate is excluded from the Get-ChildItem search results in my Cert:\LocalMachine\My path, but if I leave the flag off it is there.
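
    A hedged diagnostic: -CodeSigningCert only returns certificates carrying the Code Signing enhanced key usage, which the makecert invocation above never requests, so the cert vanishing under that flag is expected. It is worth dumping the key usages the wildcard cert actually has (the subject filter below is an assumption):

        Get-ChildItem Cert:\LocalMachine\My |
            Where-Object { $_.Subject -like 'CN=*.vw.local*' } |
            ForEach-Object {
                $_.Subject; $_.HasPrivateKey; $_.Thumbprint
                $_.Extensions | ForEach-Object { $_.Format($false) }   # key usage / EKU text
            }

    If the extensions come back signature-only, regenerating with -sky exchange and an explicit EKU (for example -eku 1.3.6.1.5.5.7.3.1 for Server Authentication) is a plausible fix, but treat that as an assumption to test rather than a documented RemoteApp requirement.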

    Read the article

  • Run command remotely on Windows computer from C#

    - by Bilal Aslam
    I have a Windows Server 2008 instance on Amazon EC2 (Amazon's cloud compute platform, which provides VMs in the cloud). It has an external IP, and I have an admin account on the box. I would like to 'bootstrap' this instance remotely, i.e. run commands to download, install and configure apps on it, all without having to log on even once. Also, I cannot use psexec on the source computer. I have figured out how to do this to a remote, domain-joined computer using WMI. However, I have NOT been able to do it for a remote computer on EC2. Here are the specific restrictions:

        1) The remote computer is not part of my domain, hence no Kerberos
        2) The remote computer does not have a cert I trust, or vice versa

    I am sure I am running into some auth/trust restriction. Is there any way I can run a single command on the remote machine, given that I have admin privileges? I'm not tied down to using WMI, but I do need to run a command somehow. Feels like this should be a solved problem.
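
    A minimal C# sketch of non-domain WMI with explicit local credentials (NTLM), assuming the DCOM/WMI ports are open in both the EC2 security group and the Windows firewall; the IP address, password and command line below are placeholders, not values from the question:

        using System;
        using System.Management;

        class RemoteExec
        {
            static void Main()
            {
                var options = new ConnectionOptions
                {
                    Username = "Administrator",          // local account on the remote box
                    Password = "placeholder-password",
                    Impersonation = ImpersonationLevel.Impersonate,
                    Authentication = AuthenticationLevel.PacketPrivacy
                };

                var scope = new ManagementScope(@"\\203.0.113.10\root\cimv2", options);
                scope.Connect();

                // launch a process on the remote machine via Win32_Process.Create
                var processClass = new ManagementClass(scope, new ManagementPath("Win32_Process"), null);
                var inParams = processClass.GetMethodParameters("Create");
                inParams["CommandLine"] = @"cmd.exe /c echo bootstrapped > C:\bootstrap.log";
                var outParams = processClass.InvokeMethod("Create", inParams, null);
                Console.WriteLine("ReturnValue: {0}", outParams["ReturnValue"]);
            }
        }

    If WMI/DCOM proves too painful across the internet, WinRM/PowerShell remoting over HTTPS is the other common route for bootstrapping EC2 Windows instances.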

    Read the article

  • Installing lots of Perl modules

    - by Colin Pickard
    Hi, I've been landed with the job of documenting how to install a very complicated application onto a clean server. Part of the application requires a lot of Perl scripts, each of which seems to require lots of different Perl modules. I don't know much about Perl, and I only know one way to install the required modules. This means my documentation now looks like this:

    "Type each of these commands and accept all the defaults:"

        sudo perl -MCPAN -e 'install JSON'
        sudo perl -MCPAN -e 'install Date::Simple'
        sudo perl -MCPAN -e 'install Log::Log4perl'
        sudo perl -MCPAN -e 'install Email::Simple'
        (.... continues for 2 more pages... )

    Is there any way I can do all this in one line, like I can with aptitude? i.e. "Type the following command and go get a coffee":

        sudo aptitude install openssh-server libapache2-mod-perl2 build-essential ...

    Thank you (on behalf of the long-suffering people who will be reading my document)

    EDIT: The best way to do this is to use the packaged versions. For the modules which were not packaged for Ubuntu 10.10 I ended up with a little Perl script which I found here:

        #!/usr/bin/perl -w
        use CPANPLUS;
        use strict;
        CPANPLUS::Backend->new( conf => { prereqs => 1 } )->install(
            modules => [ qw(
                Date::Simple File::Slurp LWP::Simple
                MIME::Base64 MIME::Parser MIME::QuotedPrint
            ) ]
        );

    This means I can put a nice one-liner in my document:

        sudo perl installmodules.pl
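
    Two hedged one-liner alternatives, assuming the server has CPAN network access (module list abbreviated):

        # the cpan(1) client that ships with Perl accepts several modules at once
        sudo cpan JSON Date::Simple Log::Log4perl Email::Simple

        # or, with cpanminus installed (packaged as "cpanminus" on later Ubuntu releases)
        sudo cpanm JSON Date::Simple Log::Log4perl Email::Simple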

    Read the article

  • Pre-startup segmentation fault with ptrace

    - by sfink
    I have somehow managed to mangle my computer so that any time I attempt to use something that uses ptrace to trace another process (e.g. strace, gdb), I get an immediate segmentation fault. For example:

        # strace /bin/true
        execve("/bin/true", ["/bin/true"], [/* 27 vars */]) = 0
        --- SIGSEGV (Segmentation fault) @ 0 (0) ---
        +++ killed by SIGSEGV +++

    or with gdb:

        # gdb /bin/true
        GNU gdb Fedora (6.8-27.el5)
        Copyright (C) 2008 Free Software Foundation, Inc.
        License GPLv3+: GNU GPL version 3 or later
        This is free software: you are free to change and redistribute it.
        There is NO WARRANTY, to the extent permitted by law.  Type "show copying" and "show warranty" for details.
        This GDB was configured as "x86_64-redhat-linux-gnu"...
        (no debugging symbols found)
        (gdb) run
        Starting program: /bin/true
        Program terminated with signal SIGSEGV, Segmentation fault.
        The program no longer exists.
        You can't do that without a process to debug.

    rpm -V comes up clean on strace, gdb, and glibc. I do not have any LD_* variables set, and PATH has nothing special in it.
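
    A few hedged diagnostics worth trying; the prelink and exec-shield angles are assumptions to rule out, not a confirmed diagnosis:

        # anything force-loaded into every process?
        cat /etc/ld.so.preload
        # undo prelinking, which has occasionally broken traced process startup on RHEL/Fedora
        sudo prelink -ua
        # check kernel hardening settings that affect how new processes are laid out
        sysctl kernel.exec-shield kernel.randomize_va_space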

    Read the article

  • FTP Server on CentOS 5.8 - Transfer fails randomly

    - by Diego
    I have ProFTPD running on a brand new CentOS 5.8 server with Plesk, and its behaviour is inconsistent at best. I tried to transfer a directory from my PC, and every time I get a transfer failure on a random file. It's never the same one that fails, it just fails. Sometimes it's a .gif, sometimes it's a .css, sometimes it's a JPG. Of several hundred files, a dozen always fail for no apparent reason. The error that I get is the following:

        COMMAND:> [27/11/2012 11:43:52] STOR main_border.gif
                  [27/11/2012 11:43:53] 500 Invalid command: try being more creative
        ERROR:>   [27/11/2012 11:43:53] Syntax error: command unrecognized.

    The above is just an example; the "command unrecognized" occurs with LIST and other commands as well. Here's the ProFTPD configuration, just in case:

        ServerName "ProFTPD"
        #ServerType standalone
        ServerType inetd
        DefaultServer on
        <Global>
            DefaultRoot ~ psacln
            AllowOverwrite on
        </Global>
        DefaultTransferMode binary
        UseFtpUsers on
        TimesGMT off
        SetEnv TZ :/etc/localtime
        Port 21
        Umask 022
        MaxInstances 30
        ScoreboardFile /var/run/proftpd/scoreboard
        TransferLog /usr/local/psa/var/log/xferlog
        #Change default group for new files and directories in vhosts dir to psacln
        <Directory /var/www/vhosts>
            GroupOwner psacln
        </Directory>
        # Enable PAM authentication
        AuthPAM on
        AuthPAMConfig proftpd
        IdentLookups off
        UseReverseDNS off
        AuthGroupFile /etc/group
        Include /etc/proftpd.include

    Note: the file /etc/proftpd.include is blank. The above is the default configuration set by Plesk 11. I don't know much about why it is that way; my knowledge of Linux system administration is very basic and my knowledge of ProFTPD is a complete zero. Thanks in advance for the help.

    Update: the issue occurs with both CuteFTP and FileZilla.

    Update: replaced ProFTPD with Pure-FTPd and the issue persists. Sometimes I get "command unrecognized", sometimes "failed to establish data connection". I'm starting to think that it could be a network issue, but I have completely zero knowledge of networking.
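
    A hedged angle, given that the problem survived a switch to Pure-FTPd: random "500 Invalid command" and "failed to establish data connection" mid-transfer often point at something between client and server mangling the FTP control or data connections (a NAT device, an FTP ALG, or active-vs-passive mode trouble) rather than at the FTP daemon. A sketch for the passive-mode side of that theory, using real ProFTPD directives but values that are assumptions:

        # proftpd.conf -- pin the passive data ports so they can be opened in the firewall
        PassivePorts 49152 65534
        # if the server sits behind NAT, also advertise its public address
        # MasqueradeAddress 203.0.113.10

    The matching firewall change (allow TCP 49152-65534, or load the ip_conntrack_ftp helper) and forcing the clients into passive mode would be the companion steps to test.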

    Read the article

  • How do I force a server to leave a SharePoint farm

    - by Stefan
    I have two web servers in a SharePoint (WSS 3.0) farm with one database server for the config and content databases. I already moved my content databases to a new database server successfully. But when I tried to move the SharePoint config database using the "stsadm deleteconfigdb" and "stsadm setconfigdb" commands, one of my servers got stuck in an intermediate state. I was able to join one of the web servers to the config database on the new server, but the other server is not able to join because it believes it is already part of the farm (which it used to be, before the move). In Central Administration the status of the services on that server is shown as "stopping". Even after rebooting all servers involved, uninstalling SharePoint and what not, this status does not change, and because of it I am not able to join the second server to the new config database; I get random error messages when trying to join the farm. I believe that if I can un-stick this server, it will be able to join the farm again. The farm believes the second server is already part of it, but the web server itself knows it's not. Any ideas on how to forcefully kick a server out of the farm?
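
    A hedged sketch of the usual force-removal commands, run on the stuck web server from the WSS 3.0 BIN directory (back everything up first; both forcibly detach the server from whatever config database it thinks it belongs to):

        REM force this server to drop its current farm membership
        stsadm -o deleteconfigdb
        REM or the equivalent via the configuration wizard's command-line tool
        psconfig -cmd configdb -disconnect

    After the disconnect, re-running the configuration wizard (or psconfig -cmd configdb -connect against the new database server) is the assumed path to rejoin the farm.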

    Read the article

  • Problems using ssh from cron

    - by Travis
    I am attempting to automate a script that executes commands on remote machines via ssh. I have public key authentication set up between the machines using ssh-agent. The script runs fine when executed from the command prompt. I suspect my problem is that cron doesn't have access to the ssh-agent due to its minimalist environment. Here is the output when I add the -v flag to ssh:

        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Next authentication method: gssapi-with-mic
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: Next authentication method: publickey
        debug1: Offering public key: /home/<user>/.ssh/id_rsa
        debug1: Server accepts key: pkalg ssh-rsa blen 149
        debug1: PEM_read_PrivateKey failed
        debug1: read PEM private key done: type <unknown>
        debug1: Trying private key: /home/<user>/.ssh/id_dsa
        debug1: Next authentication method: password
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        Permission denied, please try again.
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        Permission denied, please try again.
        debug1: Authentications that can continue: publickey,gssapi-with-mic,password
        debug1: No more authentication methods to try.
        Permission denied (publickey,gssapi-with-mic,password).

    How can I make this work? Thanks!
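
    The "PEM_read_PrivateKey failed" line suggests the key has a passphrase and, without the agent, the cron job cannot decrypt it. Two hedged crontab sketches (paths, hostnames and schedule are placeholders):

        # Option 1: a dedicated key without a passphrase, referenced explicitly
        15 * * * * ssh -i /home/user/.ssh/cron_id_rsa -o BatchMode=yes remotehost /usr/local/bin/job.sh

        # Option 2: keep the passphrase and reuse a long-running agent via keychain,
        # which writes the agent variables to ~/.keychain/<hostname>-sh at login
        15 * * * * . "$HOME/.keychain/$(hostname)-sh" && ssh remotehost /usr/local/bin/job.sh

    Option 2 assumes keychain is installed and run from the interactive login shell so the agent file exists when cron fires.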

    Read the article

  • Force local IP traffic to an external interface

    - by calandoa
    I have a machine with several interfaces that I can configure as I want, for instance:

        eth1: 192.168.1.1
        eth2: 192.168.2.2

    I would like to forward all the traffic sent to one of these local addresses through the other interface. For instance, all requests to an iperf, ftp or http server at 192.168.1.1 should not just be routed internally, but forwarded through eth2 (and the external network will take care of re-routing the packet to eth1). I tried and looked at several commands, like iptables, ip route, etc., but nothing worked. The closest behavior I could get was with:

        ip route change to 192.168.1.1/24 dev eth2

    which sends all 192.168.1.x traffic out eth2, except for 192.168.1.1 itself, which is still routed internally. Maybe I could then do NAT forwarding of all traffic directed to a fake 192.168.1.2 on eth1, rerouted to 192.168.1.1 internally? I am actually struggling with iptables; it is too tough for me. The goal of this setup is to do interface driver testing without using two PCs. I am using Linux, but if you know how to do that with Windows, I'll buy it!
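
    A hedged alternative to the iptables fake-address trick: network namespaces sidestep the kernel's habit of short-circuiting traffic between local addresses, which is exactly what defeats the ip route approach. This is a different technique than the NAT idea in the question, and it assumes a reasonably recent iproute2 with ip netns support; the 192.168.2.3 address is an assumption chosen to share eth2's subnet:

        # move eth1 into its own namespace so it is no longer "local" to the host
        ip netns add testns
        ip link set eth1 netns testns
        ip netns exec testns ip addr add 192.168.2.3/24 dev eth1
        ip netns exec testns ip link set eth1 up
        ip netns exec testns ip link set lo up

        # server runs behind eth1, inside the namespace
        ip netns exec testns iperf -s
        # from the host this now genuinely crosses the wire out of eth2 and back in via eth1
        iperf -c 192.168.2.3

    This keeps both NICs and their drivers in the data path, which is what the driver-testing goal needs.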

    Read the article

  • Virtualisation with KVM: export services from the guest to the host

    - by ascobol
    Hello, I would like to export some services from the guest OS to the host OS via kvm, and along the way learn some things about networking. I have tried the following commands. On the host (Kubuntu 10.4):

        $ sudo tunctl -u ascobol
        Set 'tap0' persistent and owned by uid 2401
        $ sudo ifconfig tap0 192.168.2.1 netmask 255.255.255.0 broadcast 192.168.2.255

    The ifconfig command returns:

        $ /sbin/ifconfig tap0
        tap0      Link encap:Ethernet  HWaddr 3e:4e:e3:cc:bc:92
                  inet addr:192.168.2.1  Bcast:192.168.2.255  Mask:255.255.255.0
                  inet6 addr: fe80::3c4e:e3ff:fecc:bc92/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:0 errors:0 dropped:17 overruns:0 carrier:0
                  collisions:0 txqueuelen:500
                  RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

        $ route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 tap0

    Then I run the virtual machine (Ubuntu Server 10.4):

        $ sudo kvm -hda ubuntuserver104.qcow2 -net nic -net tap,name=tap0,script=no

    (I'm using sudo because without it kvm fails with the following message:)

        warning: could not configure /dev/net/tun: no virtual network emulation

    With sudo the virtual machine boots; I just get this message:

        pci_add_option_rom: failed to find romfile "pxe-rtl8139.bin"

    In the virtual machine:

        $ ifconfig eth0 192.168.2.2 netmask 255.255.255.0 broadcast 192.168.2.255

    Now if I run:

        $ ssh 192.168.2.2

    I just get "No route to host". What is wrong with this setup? Thanks!
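
    A hedged observation on the kvm command line: with "-net tap,name=tap0", the name= option only labels the network client; the option that attaches the guest to an existing tap device is ifname=. Without it, kvm tries to create a brand-new tap interface (hence the /dev/net/tun warning and the need for sudo), and that new interface has no address on the host. A sketch of the adjusted invocation, under that assumption:

        $ sudo kvm -hda ubuntuserver104.qcow2 -net nic -net tap,ifname=tap0,script=no,downscript=no

    With the guest actually attached to tap0, the 192.168.2.1 <-> 192.168.2.2 pair should be reachable from the host, assuming no firewall rules get in the way.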

    Read the article

  • Multiple Servers + One MailServer

    - by theomega
    Hi, I have several Linux servers (running Debian) on which different services run: database servers, web servers, application servers, tools and so on. All servers are connected to the same internal network. There is also one special server, the mail server: all mail accounts are stored on it, and it is also the outbound mail server for all the other machines. I want all mail from all servers to end up on the mail server. For example, if a cron job fails on one of the web servers, the mail should not be delivered to the local user but instead to the mail server, so I get a centralized place for mail storage. How would you set up this scenario? My current setup is: postfix as MTA on the mail server and ssmtp on all the other servers. ssmtp is configured to send the mails to the mail server, and the mail server is configured to allow the whole internal network to relay mail through it. Is this the right approach? I also thought about setting up an MTA (postfix) on every server and configuring it somehow to forward the mails. What would be the advantage of that solution?
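
    A minimal sketch of the two approaches, with hostnames and addresses as placeholders. For the current ssmtp-based setup:

        # /etc/ssmtp/ssmtp.conf on each non-mail server
        mailhub=mail.internal.example
        rewriteDomain=internal.example
        FromLineOverride=YES

        # /etc/ssmtp/revaliases -- route root's cron mail to a central mailbox
        root:sysadmin@internal.example:mail.internal.example

    A local postfix "nullclient" on every server does the same job but adds a real queue, so mail is retried if the central server is briefly down; that is the main advantage of the second idea, at the cost of one more daemon per host:

        # /etc/postfix/main.cf (nullclient sketch)
        relayhost = [mail.internal.example]
        inet_interfaces = loopback-only
        mydestination =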

    Read the article

  • Optimal file system type and mount options for an rsnapshot dedicated drive

    - by Nimmy Lebby
    We have an external USB 2 drive that we are using as a backup drive for our configuration. We use rsnapshot for the backups. It uses a few standard commands for managing snapshots:

        rm -rf: deletes expired snapshots
        mv: moves older snapshots down a slot
        cp -al: duplicates the last snapshot to a new slot
        rsync -a --delete --numeric-ids --relative: synchronizes the new snapshot

    As you can see from the log below, the majority of the time is spent on the rm -rf and cp -al steps:

        [25/Dec/2010:14:00:02] rsnapshot hourly: started
        [25/Dec/2010:14:00:02] echo 21012 > /var/run/rsnapshot.pid
        [25/Dec/2010:14:00:02] rm -rf /mnt/extdrive/snapshots/hourly.5/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.4/ /mnt/extdrive/snapshots/hourly.5/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.3/ /mnt/extdrive/snapshots/hourly.4/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.2/ /mnt/extdrive/snapshots/hourly.3/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.1/ /mnt/extdrive/snapshots/hourly.2/
        [25/Dec/2010:14:15:48] cp -al /mnt/extdrive/snapshots/hourly.0 /mnt/extdrive/snapshots/hourly.1
        [25/Dec/2010:14:23:32] rsync -a --delete --numeric-ids --relative /etc /mnt/extdrive/snapshots/hourly.0/sm4/
        [25/Dec/2010:14:23:52] touch /mnt/extdrive/snapshots/hourly.0/
        [25/Dec/2010:14:23:52] rm -f /var/run/rsnapshot.pid
        [25/Dec/2010:14:23:52] rsnapshot hourly: completed successfully

    My questions: I'm currently using ext4 for the filesystem. Maybe this is not the best choice from those available in Red Hat; does anyone have recommendations that would speed up the process? The partition's mount options are "sync,dirsync 1 2". Is there a way to optimize this, given that the drive is used solely for rsnapshot? Of course, reasoning would be greatly appreciated.
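
    A hedged sketch of the two most likely wins (device name, mount point and rsnapshot version are assumptions to verify): sync and dirsync force every hard link and unlink to hit the USB disk synchronously, which is exactly what rm -rf and cp -al do by the thousand, so dropping them should help far more than changing filesystems.

        # /etc/fstab -- asynchronous writes, no atime updates; data=writeback is an optional
        # further relaxation of journalling for a disposable backup volume
        /dev/sdb1  /mnt/extdrive  ext4  noatime  1 2

        # /etc/rsnapshot.conf -- let rsync create the hard links itself (--link-dest) instead
        # of a separate cp -al pass; available in rsnapshot 1.2+, fields are tab-separated
        link_dest	1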

    Read the article

  • Monitoring multiple sites on a single server using OpsView

    - by Kev
    We have several web servers. On each of these servers there can be ~250 web sites. I need to add an HTTP check for each site on each server. Each site has a reserved host header that we know can always be resolved, in the format:

        w10000.hostchecks.mycompany.com
        w10020.hostchecks.mycompany.com
        w11992.hostchecks.mycompany.com
        ..and so on..

    What I want is a master ping check on the web server's main IP address and then separate HTTP checks for each of the sites on the server. If the master ping test fails, I want the HTTP tests to cease until the master ping check goes OK. I had a stab at this and tried to do the following:

        Create a parent host that does a ping check on the server's main IP address (e.g. the server is named WEB0001).
        For each of the sites that reside on WEB0001:
          - Create a separate host with a primary hostname of wXXXXX.hostchecks.mycompany.com
          - Make WEB0001 the parent host
          - Add a monitor (an HTTP check to a special URL that is mapped into each site using a virtual directory):
            -H $HOSTADDRESS$ -u /__hostcheck/IsAlive.aspx -w 5 -c 10 -p 80

    However I find that if I down the parent server (WEB0001), the HTTP checks seem to continue. Am I going about this completely the wrong way?
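
    A hedged note on why the parent relationship alone may not be enough: in the Nagios engine underneath Opsview, parent/child mainly controls reachability and notifications; it is a dependency that actually suppresses check execution. Opsview generates these objects from its UI, so the raw definition below is only an illustration of the concept, with assumed host and service names:

        define servicedependency {
            host_name                        WEB0001
            service_description              PING
            dependent_host_name              w10000.hostchecks.mycompany.com
            dependent_service_description    Site HTTP check
            execution_failure_criteria       w,u,c
            notification_failure_criteria    w,u,c
        }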

    Read the article

  • Transferring 'Live' Documents to Another Computer

    - by waiwai933
    I was wondering if there is any OS or application that supports transferring a document to another computer without having to save, transfer and then reopen it. Basically, is there a way that, while I'm working on my desktop, I can click a button (or something similar) and have the exact state of that computer/application transferred to another? For example, if I'm writing a document, is there a way to get it to computer B without saving it, putting the file on my flash drive, and having to reopen it?

    Edit: I just realized that this is possible through the wonderful phenomenon known as cloud computing, but this is not the type of solution I'm looking for.

    Edit 2: I wanted to clarify: by 'save', I meant that I didn't want to have to save it to a special location, be that a (flash) drive or uploading to the web. Saving to the local hard drive is fine (and probably necessary, since technologies such as Bluetooth require the file to be saved somewhere). This is a bit inspired by a scene in Avatar, so I highly doubt that this actually exists... but if it does, I don't want to miss out.

    Read the article

  • CPU / Affinity mask problem in SQL 2005

    - by Robert Moir
    Hi folks, I'm having a problem with a SQL Server that was virtualised. The CPU mask was set on the physical host for some reason, and now advanced options are not available. So I need to reconfigure the CPU affinity mask settings - which are advanced options, so this is blocked because of the affinity mask issue. I've tried doing this from the SQL Server in single-user command-line mode, and I've googled and found lots of people with similar problems but no real solution. I'm stumped. Any ideas? Sample commands and output from Query Analyzer below:

        sp_configure 'show advanced options', 1
        GO
        RECONFIGURE WITH OVERRIDE
        GO
        sp_configure 'affinity mask', 0x00000000
        GO
        RECONFIGURE
        GO
        -----------------------------------------
        Configuration option 'show advanced options' changed from 0 to 1. Run the RECONFIGURE statement to install.
        Msg 5832, Level 16, State 1, Line 1
        The affinity mask specified does not match the CPU mask on this system.
        Msg 15123, Level 16, State 1, Procedure sp_configure, Line 51
        The configuration option 'affinity mask' does not exist, or it may be an advanced option.
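
    One hedged avenue, an assumption worth testing in a maintenance window rather than a confirmed fix: start the instance with minimal configuration, which is intended precisely for cases where a stored configuration value (here the affinity mask) blocks normal reconfiguration, then clear the value and restart normally.

        REM stop the service, then start the default instance in minimal-configuration, single-user mode
        net stop MSSQLSERVER
        sqlservr.exe -f -m
        REM from a sqlcmd session against that instance:
        REM   EXEC sp_configure 'affinity mask', 0;
        REM   RECONFIGURE WITH OVERRIDE;
        REM then stop sqlservr.exe and restart the service normally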

    Read the article

  • How to fix display on external Samsung Syncmaster shifted to the right when connected to Macbook Pro?

    - by joe larson
    Is there something special I need to do to be able to use external LCD displays with my new MacBook Pro? Do I need extra software, or do I possibly need a different cable? I'm attempting to use an external display with my MBP. I've got a "Mini DisplayPort to VGA Female Adapter for Mac" plugged into the Thunderbolt port on my MBP, which I understood should be compatible with Thunderbolt. I've tried this with three different SyncMaster models: a B2330 (21.5"), an EX2220 (22"), and a third (also roughly 22") which I don't have the model number for - all of them 1920x1080 - plus an additional HP monitor of similar size and resolution. In all four cases, the MBP recognizes the screen and chooses the correct resolution. However, the display is shifted over by about 1 inch, and this is true no matter which screen resolution I pick. The controls on the monitor for horizontal position don't help. Also, sometimes (especially if I drag an app over onto the second screen), the screen starts skipping left to right and showing bands of fuzz. Additionally, the monitor will periodically blink off for a moment, trying to switch from Digital to Analog and back (the SyncMaster shows text on the screen to tell you it's trying to do this). Often when it comes back from one of these blank-outs it will look OK (no skipping or fuzz) but still be shifted right; then after a few seconds it will go wrong again, skipping and fuzzy. This photo shows the worst of it; I've added red rectangles to show the physical edge of the screen, and a yellow rectangle to show the empty space on the left of the screen. (Sorry for the awful quality and lighting!) Also, it's worth noting that I am on Mac OS X 10.6.7, and yes, I have the 1.4 update installed.

    Read the article

  • Java web app deployment and ControlTier adoption

    - by Ran
    I've been searching for a configuration and deployment manager for my Java/Linux-based web service and have been looking mainly at ControlTier (http://controltier.org). We operate at a medium scale (hundreds of hosts, multi-DC, dozens of services). There seem to be plenty of lower-level sysadmin tools such as Chef, Puppet, CFEngine, Bcfg2 and more; my understanding, and the reason I call them "low level", is that they are great for system-level administration tasks such as setting up a mount, file permissions, users etc., but aren't designed, for example, for Java deployments, which usually come with a build process and special Java semantics. In many cases any tool can be used to do anything, but if it was not designed for the task it can get uncomfortable. OTOH ControlTier seems to have been designed just for that - Java application deployments - at least that's what all the tutorials on their site demonstrate. But here's the problem: the wiki at http://controltier.org/wiki/ is pretty good and stuffed with examples, and the company behind the open-source CT product is very responsive (pushy...), yet I have seen no material from third-party users on the net. No success stories, no detailed blog posts, no best practices, no cheat sheets, not even hate letters - nothing. This plays badly for DTO Solutions, CT's sponsor, for two reasons: first, it makes me suspicious about the reason for the poor adoption; and second, what do I do if I get stuck, there's no relevant page on CT's wiki, and the mailing list is too slow to answer? I'd be stuck with a "free" product that a consultancy company is pushing. So my question: I'd be interested in hearing whether anyone has real-world experience with CT for Java-based web app deployments, and whether you'd give the product a thumbs up. Any other comments that may enlighten me are welcome, of course...

    Read the article

  • Why does Apache throw a 403 on the index file after install?

    - by den-javamaniac
    Hi. I've just installed Apache and PHP from sources using the following commands:

        ./configure --prefix="/mnt/workspace/servers/web/apache-2.2.17" \
            --enable-info --enable-rewrite --enable-usertrack --enable-mime-magic

    for Apache, and

        ./configure --with-apxs2=/mnt/workspace/servers/web/apache-2.2.17/bin/apxs \
            --prefix=/mnt/workspace/servers/web/apache-2.2.17/php \
            --with-config-file-path=/mnt/workspace/servers/web/apache-2.2.17/php \
            --with-mysql=mysqlnd

    for PHP. After adjusting the configuration (httpd.conf) and starting Apache, it returns a 403 for http://localhost:8060/index.html (presuming that port 8060 is used). These are the directory settings in httpd.conf:

        <Directory "/mnt/workspace/servers/web/apache-2.2.17/htdocs">
            ...
            Order allow,deny
            Allow from all
            ...
        </Directory>

        <IfModule dir_module>
            DirectoryIndex index.html index.php
        </IfModule>

    It should be noted that Apache lives on a mounted partition (the default auto-mount configured while installing Ubuntu).

    Log files. Access log:

        ::1 - - [12/Feb/2011:17:48:30 +0200] "GET / HTTP/1.1" 403 202
        ::1 - - [12/Feb/2011:17:48:31 +0200] "GET /favicon.ico HTTP/1.1" 403 213
        ::1 - - [12/Feb/2011:17:48:48 +0200] "GET /index.html HTTP/1.1" 403 212
        ::1 - - [12/Feb/2011:17:48:48 +0200] "GET /favicon.ico HTTP/1.1" 403 213
        ::1 - - [12/Feb/2011:17:49:03 +0200] "GET /index.html HTTP/1.1" 403 212
        ::1 - - [12/Feb/2011:17:49:03 +0200] "GET /favicon.ico HTTP/1.1" 403 213

    Error log:

        [Sat Feb 12 18:59:13 2011] [notice] Apache/2.2.17 (Unix) PHP/5.3.5 configured -- resuming normal operations
        [Sat Feb 12 18:59:22 2011] [error] [client ::1] (13)Permission denied: access to / denied
        [Sat Feb 12 18:59:22 2011] [error] [client ::1] (13)Permission denied: access to /favicon.ico denied
        [Sat Feb 12 18:59:36 2011] [error] [client ::1] (13)Permission denied: access to /index.html denied
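
    "(13)Permission denied" usually means the Apache worker user cannot traverse some directory in the path, not that Order/Allow is wrong. A hedged diagnostic sketch; the path is the htdocs path above, and which user matters depends on the User directive in httpd.conf:

        # show the permissions of every component of the path at once
        namei -m /mnt/workspace/servers/web/apache-2.2.17/htdocs/index.html
        # every directory along the way needs at least execute (x) for the Apache user, e.g.
        chmod o+x /mnt /mnt/workspace /mnt/workspace/servers /mnt/workspace/servers/web
        # and check the mount itself for restrictive options
        mount | grep /mnt/workspace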

    Read the article

  • Slow performance of MySQL database on one server and fast on another one, with similar configurations

    - by Alon_A
    We have a web application that runs on two GoDaddy servers. We experience slow performance on our production server, although it has stronger hardware than the testing one, and it is dedicated. I'll start with the configurations.

    Testing:

        CentOS Linux 5.8, Linux 2.6.18-028stab101.1 on i686
        Intel(R) Xeon(R) CPU L5609 @ 1.87GHz, 8 cores
        60 GB total, 6.03 GB used
        Apache/2.2.3 (CentOS)
        MySQL 5.5.21-log
        PHP Version 5.3.15

    Production:

        CentOS Linux 6.2, Linux 2.6.18-028stab101.1 on x86_64
        Intel(R) Xeon(R) CPU L5410 @ 2.33GHz, 8 cores
        120 GB total, 2.12 GB used
        Apache/2.2.15 (CentOS)
        MySQL 5.5.27-log - MySQL Community Server (GPL) by Remi
        PHP Version 5.3.15

    We are running the same code on both servers.

    The problem: we have a function that executes ~30000 PDO exec commands. On our testing server it takes about 1.5-2 minutes to complete; on our production server it can take more than 15 minutes (as seen in qcachegrind). Researching the problem, we checked the live graphs in phpMyAdmin and discovered that the MySQL server on our testing machine was executing a steady 1000 statements per 2 seconds, while the slow production MySQL server managed only 250 statements per 2 seconds, and not steadily at all, jumping between 0 and 250 every second; the difference is clearly visible in the graphs for the two servers. Comparing the configuration of both MySQL servers (fast testing versus slow production), the differences are highlighted, but I can't find anything that would cause such a behaviour difference, as the configs are mostly the same. Maybe you can see something that I can't. Note that our tables are all InnoDB, so the MyISAM differences are (probably) not relevant. Maybe it is the MySQL Community Server (GPL) installed on the production server that causes the slow performance? Or maybe it needs to be configured differently for 64-bit? I'm currently out of ideas...
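
    A hedged place to start when tens of thousands of small statements are fast on one box and slow on another: the durability/flush settings, which dominate per-statement cost on InnoDB. A diagnostic sketch (not a diagnosis) to run on both servers and compare:

        SHOW VARIABLES WHERE Variable_name IN
            ('innodb_flush_log_at_trx_commit', 'sync_binlog',
             'innodb_flush_method', 'query_cache_size', 'log_bin');

    If production flushes on every commit (innodb_flush_log_at_trx_commit = 1) while testing does not, wrapping the ~30000 PDO execs in a single transaction (one BEGIN ... COMMIT around the loop) is usually a safe application-side fix that avoids relaxing durability on the server.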

    Read the article

  • How to set up multiple SSIDs with bandwidth limiting on a single wireless router?

    - by Rahul Narain
    I have an Asus WL-520GU wireless router connected to a cable modem that I use for wireless internet access in my apartment. I would like to set it up so that it provides two SSIDs: one secured and password-protected for my regular use, and a "guest" SSID that's unsecured but throttled to, say, 10% of the available bandwidth. What is the most straightforward way to do this? I've been looking into DD-WRT and Tomato, both of which support my router. DD-WRT supports setting up multiple SSIDs using the GUI, but I don't know if it's possible to limit the bandwidth of each SSID independently; point #12 in this FAQ thread says it's not possible to limit by day or by MAC address, which is discouraging but not conclusive. Tomato allows bandwidth limits in its QoS settings, going by the screenshot here, but multiple SSID support is still experimental and it doesn't look like it will work with the encryption settings or bandwidth limits in the GUI. I'd like to know a good way to do this which gives me the fewest opportunities for screwing up. I'm no stranger to the command line, if that turns out to be what's necessary, but if so, please explain what the commands are doing because I don't have a good mental model of what needs to happen to set this up.

    Read the article

  • AVCHD MTS h264 1080p file with choppy playback in Linux

    - by marc
    When I try to play video files from my camera:

        Seems stream 0 codec frame rate differs from container frame rate: 50.00 (50/1) -> 50.00 (50/1)
        Input #0, mpegts, from '00027.MTS':
          Duration: 00:00:38.88, start: 2.884289, bitrate: 16945 kb/s
          Program 1
            Stream #0.0[0x1011]: Video: h264 (High), yuv420p, 1920x1080 [PAR 1:1 DAR 16:9], 50 fps, 50 tbr, 90k tbn, 50 tbc
            Stream #0.1[0x1100]: Audio: ac3, 48000 Hz, stereo, s16, 256 kb/s

    ... on my Linux computer (Ubuntu 12.04) I get choppy playback. It's completely unusable. I tried Totem, VLC and mplayer; the result is always the same issue. I sent the same video file to a friend with Ubuntu 10.04 to test, and he has the same problem; on Windows 7 he confirms the video plays well. I have an Intel® Core™2 CPU 6300 @ 1.86GHz × 2 with a GF 9600 GT, using the closed NVIDIA drivers. This is not a case of big files playing slowly from a hard disk - I have an SSD! I spent the last days and nights trying hundreds of commands for ffmpeg, HandBrake and mencoder, and none of them let me create a file with enough quality. I downloaded a few movies from YouTube in 1080p, and playback worked well without any big pixels or choppiness. I would like the highest possible quality; I will put these files onto a Blu-ray disc, so I don't need to compress them to a smaller size. I just want smooth playback on my Linux box. On Windows, the same file works well.
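
    A hedged thing to try before any re-encoding: hardware H.264 decoding via VDPAU, which the GF 9600 GT supports with the closed NVIDIA driver; 1080p50 H.264 at ~17 Mbit/s is heavy for a two-core Core 2 decoding in software.

        # mplayer with VDPAU video output and the VDPAU H.264 decoder
        mplayer -vo vdpau -vc ffh264vdpau 00027.MTS

    In VLC the equivalent is enabling GPU-accelerated decoding in the input/codec preferences, assuming the relevant VDPAU/VA-API packages are installed.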

    Read the article

  • Permission / owner issue with pushing to git when editing directly from repo?

    - by Susan
    I have a web interface for deploying scripts from our repo at GitHub to our live server. The web interface just triggers a bash script with some git commands. If I make changes locally, push to the repo, then run the bash script to pull from the repo to live, it works fine. However, if I make changes directly in the repo (via GitHub's web interface), I run into fast-forward / lock issues. These are the steps I'm taking:

        Make a change to a file in the GitHub repo.
        Run a bash script (as apache) via the web from the live server that attempts a git push / pull.

    I get these problems:

        PUSH
        To [email protected]:name/name.git
         ! [rejected]        master -> master (non-fast-forward)
        error: failed to push some refs to '[email protected]:name/name.git'
        To prevent you from losing history, non-fast-forward updates were rejected
        Merge the remote changes before pushing again. See the 'Note about fast-forwards' section of 'git push --help' for details.

        PULL
        From github.com:name/name
          branch            master -> FETCH_HEAD
        error: unable to unlink old 'includes/footer.inc' (Permission denied)
        Updating 8f6d922..d1eba9d

    If I SSH in as root and attempt a push / pull, it works fine. Any ideas on why this method would not work from apache?
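
    A hedged sketch of the usual fix: the checkout on the live server was created by root, so the apache user cannot unlink the files git needs to replace. Aligning ownership with the user the web interface runs as generally clears the unlink and lock errors; the path and user/group names below are assumptions:

        # hand the deployment checkout to the web user
        chown -R apache:apache /var/www/site
        # if several users share the checkout, keep permissions group-friendly
        git config core.sharedRepository group

    The non-fast-forward rejection on push is a separate issue: the script is pushing a branch that is behind the commit made through GitHub's web editor, so it needs to pull (or fetch and merge/rebase) before it can push.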

    Read the article

  • How do I remove a USB drive's write protection?

    - by nate
    I have a SanDisk Cruzer Blade USB stick that suddenly seems to be write protected. I tried running DiskPart, but after I enter the command "attributes disk clear readonly" it displays this:

        Microsoft DiskPart version 5.1.3565

        ADD        - Add a mirror to a simple volume.
        ACTIVE     - Marks the current basic partition as an active boot partition.
        ASSIGN     - Assign a drive letter or mount point to the selected volume.
        BREAK      - Break a mirror set.
        CLEAN      - Clear the configuration information, or all information, off the disk.
        CONVERT    - Converts between different disk formats.
        CREATE     - Create a volume or partition.
        DELETE     - Delete an object.
        DETAIL     - Provide details about an object.
        EXIT       - Exit DiskPart
        EXTEND     - Extend a volume.
        HELP       - Prints a list of commands.
        IMPORT     - Imports a disk group.
        LIST       - Prints out a list of objects.
        INACTIVE   - Marks the current basic partition as an inactive partition.
        ONLINE     - Online a disk that is currently marked as offline.
        REM        - Does nothing. Used to comment scripts.
        REMOVE     - Remove a drive letter or mount point assignment.
        REPAIR     - Repair a RAID-5 volume.
        RESCAN     - Rescan the computer looking for disks and volumes.
        RETAIN     - Place a retainer partition under a simple volume.
        SELECT     - Move the focus to an object.

    It's like when you type "help" at the DiskPart prompt, so how do I get past this? The problem started when I plugged the stick into a laptop which had viruses, if that's any help.
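
    A hedged observation: "Microsoft DiskPart version 5.1.xxxx" is the Windows XP build, which predates the "attributes" command, so DiskPart simply prints its command list. Two things worth trying, treated as assumptions rather than guaranteed fixes (SanDisk controllers can also latch into permanent read-only mode when they detect failing flash, which no software command undoes):

        REM on Windows Vista/7 or newer, where the command exists:
        REM   diskpart
        REM   select disk <n>
        REM   attributes disk clear readonly

        REM or clear the host-side write-protect policy in the registry, then replug the stick:
        reg add "HKLM\SYSTEM\CurrentControlSet\Control\StorageDevicePolicies" /v WriteProtect /t REG_DWORD /d 0 /f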

    Read the article

  • Split big Apache log to folder structure

    - by Dough
    I just changed my Apache logging setup because it was giving me very big files... So I now use cronolog to split my logs into log/httpd/2012/11/access_2012.11.30.log, for example, with the pattern:

        %Y/%m/access_%Y.%m.%d.log

    I now want to split my old 42 GB file into the same structure but really don't know how to do that efficiently. I tried some simple commands with cat, egrep, awk... but really don't know how to handle all that in a more powerful script. Here is what the log looks like:

        x.x.237.134 - - [08/Apr/2011:14:43:09 +0200] "GET...
        x.x.50.15 - - [08/Apr/2011:14:43:09 +0200] "GET...
        [...]
        x.x.254.19 - - [28/Feb/2012:15:24:48 +0100] "GET...

    So for each line I need to extract:

        year  %Y (e.g. 2012)
        month %m (e.g. 11)
        day   %d

    and push the entire line out to:

        %Y/%m/access_%Y.%m.%d.log

    Can someone give me clues to get that working? Thanks a lot for your interest.
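
    A hedged one-pass awk sketch, assuming the timestamp is the fourth whitespace-separated field as shown above, the big file is in chronological order, and the destination root is log/httpd (test on a small extract of the 42 GB file first):

        awk '
        BEGIN {
            split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", names, " ");
            for (i = 1; i <= 12; i++) mon[names[i]] = sprintf("%02d", i);
        }
        {
            # $4 looks like [08/Apr/2011:14:43:09
            split(substr($4, 2), t, "[/:]");           # t[1]=day, t[2]=month name, t[3]=year
            dir  = "log/httpd/" t[3] "/" mon[t[2]];
            file = dir "/access_" t[3] "." mon[t[2]] "." t[1] ".log";
            if (file != cur) {                         # chronological input: close files as we go
                if (cur != "") close(cur);
                system("mkdir -p " dir);
                cur = file;
            }
            print >> cur;
        }' old_access_log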

    Read the article

  • IIS URL Rewrite working inconsistently?

    - by Don Jones
    I'm having some oddness with the URL rewriting in IIS 7. Here's my Web.config (below). You'll see "imported rule 3," which grabs attempts to access /sitemap.xml and redirects them to /sitemap/index. That rule works great. Right below it is imported rule 4, which grabs attempts to access /wlwmanifest.xml and redirects them to /mwapi/wlwmanifest. That rule does NOT work. (BTW, I do know it's "rewriting" not "redirecting" - that's what I want.) So... why would two identically-configured rules not work the same way? Order makes no difference; imported rule 4 doesn't work even if it's in the first position. Thanks for any advice!

    EDIT: Let me represent the rules in .htaccess format so they don't get eaten :)

        RewriteEngine On

        # skip existing files and folders
        RewriteCond %{REQUEST_FILENAME} -s [OR]
        RewriteCond %{REQUEST_FILENAME} -l [OR]
        RewriteCond %{REQUEST_FILENAME} -d
        RewriteRule ^.*$ - [NC,L]

        # get special XML files
        RewriteRule ^(.*)sitemap.xml$ /sitemap/index [NC]
        RewriteRule ^(.*)wlwmanifest.xml$ /mwapi/index [NC]

        # send everything to index
        RewriteRule ^.*$ index.php [NC,L]

    The "sitemap" rewrite rule works fine; the "wlwmanifest" rule returns a "not found." Weird.
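
    Two hedged observations on the rules as shown (assumptions drawn from the .htaccess rendering, since the actual Web.config was eaten): the wlwmanifest rule rewrites to /mwapi/index even though the prose says it should go to /mwapi/wlwmanifest, so the "not found" may simply be the wrong target; and neither XML rule carries an [L] flag, so processing can fall through to the catch-all index.php rule. A sketch of the adjusted pair:

        # get special XML files and stop processing once matched
        RewriteRule ^(.*)sitemap\.xml$     /sitemap/index      [NC,L]
        RewriteRule ^(.*)wlwmanifest\.xml$ /mwapi/wlwmanifest  [NC,L]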

    Read the article
