Search Results

Search found 13225 results on 529 pages for 'dynamic variables'.

Page 206/529 | < Previous Page | 202 203 204 205 206 207 208 209 210 211 212 213  | Next Page >

  • What paths are guaranteed to exist on Windows Server 2008 R2?

    - by jpmc26
    What paths are guaranteed to exist on a Windows Server 2008 R2 instance? A client is requiring that some instructions specify exact paths in all cases. (The person executing said instructions is not supposed to have to decide on any path themselves, even when the path makes absolutely no difference.) So I need to know what paths I can rely on to be there. It's fine with me if they involve environment variables, but they need to be variables guaranteed to hold an existing path. (That is, there must be no possibility of the variable pointing to a path that doesn't exist.) Or are there no guaranteed paths?

    Read the article

  • Need help on awk/sed/perl pattern with regex/grep

    - by Jayakumar K
    Sample output from grep: file1:my $dbh = DBI->connect("dbi:mysql:$database_name", $DB_USER, $DB_PASSWD) file2:($dbc,$rc) = mysql_connect($mysql_host,$mysql_user,$mysql_password); The awk pattern should get the values database_name, DB_USER and DB_PASSWD from line 1 and mysql_host, mysql_user and mysql_password from line 2, i.e. all variables inside the function call. Then it should search for the declaration of each variable in its file, up to the ; (semicolon). For example, database_name in file1 may be $database_name = "dbweb" ; and mysql_user in file2 may be $mysql_user="root" ; Result: it should display the variable declarations of all 6 variables along with the filenames: file2:$mysql_host = "db1"; file2:$mysql_user = "root"; file1:$DB_USER = 'user';
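
    A minimal shell sketch of one possible approach, assuming GNU grep/sed-style tools, that every variable of interest appears as a plain $name inside the connect call, and that the assignments start at the beginning of a line as in the examples above:

        # 1) take the connect() lines, 2) pull out every $variable in them,
        # 3) print the line in the same file where that variable is assigned
        grep -H 'connect' file1 file2 |
        while IFS=: read -r file rest; do
            printf '%s\n' "$rest" | grep -o '\$[A-Za-z_][A-Za-z0-9_]*' | tr -d '$' |
            while read -r var; do
                grep -H '^[[:space:]]*\$'"$var"'[[:space:]]*=' "$file"
            done
        done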

    Read the article

  • PHP-FPM Pool, Child Processes and Memory Consumption

    - by Jhilke Dai
    In my PHP-FPM configuration I have 3 pools; the config is: ;;;;;;;;;;;;;;;;;;;;;;; ; Pool 1 ; ;;;;;;;;;;;;;;;;;;;;;;; [www1] user = www group = www listen = /tmp/php-fpm1.sock; listen.backlog = -1 listen.owner = www listen.group = www listen.mode = 0666 pm = dynamic pm.max_children = 40 pm.start_servers = 6 pm.min_spare_servers = 6 pm.max_spare_servers = 12 pm.max_requests = 250 slowlog = /var/log/php/$pool.log.slow request_slowlog_timeout = 5s request_terminate_timeout = 120s rlimit_files = 131072 ;;;;;;;;;;;;;;;;;;;;;;; ; Pool 2 ; ;;;;;;;;;;;;;;;;;;;;;;; [www2] user = www group = www listen = /tmp/php-fpm2.sock; listen.backlog = -1 listen.owner = www listen.group = www listen.mode = 0666 pm = dynamic pm.max_children = 40 pm.start_servers = 6 pm.min_spare_servers = 6 pm.max_spare_servers = 12 pm.max_requests = 250 slowlog = /var/log/php/$pool.log.slow request_slowlog_timeout = 5s request_terminate_timeout = 120s rlimit_files = 131072 ;;;;;;;;;;;;;;;;;;;;;;; ; Pool 3 ; ;;;;;;;;;;;;;;;;;;;;;;; [www3] user = www group = www listen = /tmp/php-fpm3.sock; listen.backlog = -1 listen.owner = www listen.group = www listen.mode = 0666 pm = dynamic pm.max_children = 40 pm.start_servers = 6 pm.min_spare_servers = 6 pm.max_spare_servers = 12 pm.max_requests = 250 slowlog = /var/log/php/$pool.log.slow request_slowlog_timeout = 5s request_terminate_timeout = 120s rlimit_files = 131072 I calculated the pm.max_children processes according to some example calculations on the web, such as 40 x 40 MB = 1600 MB. I have set aside 4 GB of RAM for PHP; according to those calculations that is 40 child processes per socket, and I have a total of 3 sockets in my Nginx and FPM configuration. My doubt is about the amount of memory consumed by those child processes. I tried to create high load on the server via httperf hog and siege, but I could not work out the accurate memory usage of all the PHP processes (other processes like MySQL and Nginx were also running), and all the sockets were in use. So I seek guidance from anyone who has done this before or knows exactly how pm.max_children works in PHP. Since I have 3 pools/sockets with 40 child processes each, does that amount to 3 x 40 x 40 MB of memory usage, or is it just 40 max child processes sharing the 3 sockets (so the total memory usage is just 40 x 40 MB)?
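
    pm.max_children is a per-pool limit, so three pools of 40 give a worst case of 120 workers, and the real total is that worker count multiplied by the measured per-worker resident size rather than an assumed 40 MB. A rough measurement sketch to run under load (the process name is an assumption and may be php5-fpm on Debian; RSS also slightly overstates the cost because copy-on-write pages shared with the master are counted per worker):

        # count the workers and report their average and total resident memory in MB
        ps -o rss= -C php-fpm | awk '{ n++; sum += $1 } END {
            if (n) printf "workers: %d  avg: %.1f MB  total: %.1f MB\n", n, sum/n/1024, sum/1024 }'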

    Read the article

  • Choosing my first Domain Registrar?

    - by user36914
    This will be the first domain I've ever registered, so I'm at a loss what to look for. I definitely don't want to go with GoDaddy. Here are my requirements: must have unlimited email forwards for my domain; easy to transfer away if I choose; must not be one of those shady registrars that will try to auction your domain at the end; ability to create subdomains; domain registration is private; and I would like a registrar that would let me use the dynamic IP from my ISP (cable) if I want to, so hopefully they would have some type of program that detects IP changes and updates accordingly. I've looked at a variety of registrars so far; the three left were really NameCheap, DreamHost, and DomainMonster. I have heard good things about DreamHost, but I think it's off the list because they don't give you any information about the features you get when you register your domain with them. They have a "What's included" button on the page, but it mainly lists the features for hosting, not registration. DomainMonster looks pretty cool, but I don't see anything about subdomains. Also, I would assume they don't have a system for dynamic IP address updating, so you would have to constantly check whether your ISP's IP has changed and update it manually. NameCheap also looks nice. There are two things I really like about them: right on their feature page they list "Free Dynamic DNS With Client", which is pretty cool, and they also have a free SSL certificate for the first year. I haven't messed a lot with certificates, but this would definitely be something I would use. The only minus I can see is that you only get free private WHOIS for the first year; after that it's $2.99, which isn't that big of a deal. I'm leaning towards NameCheap now. Is this a good choice? Is there anything else I should be looking at?

    Read the article

  • Cisco IOS ACL types

    - by cjavapro
    The built-in command help lists the access list types by number range. router1(config)#access-list ? <1-99> IP standard access list <100-199> IP extended access list <1100-1199> Extended 48-bit MAC address access list <1300-1999> IP standard access list (expanded range) <200-299> Protocol type-code access list <2000-2699> IP extended access list (expanded range) <700-799> 48-bit MAC address access list dynamic-extended Extend the dynamic ACL absolute timer rate-limit Simple rate-limit specific access list router1(config)# What is each of these types used for? Can multiple types of ACLs be applied to a given interface?

    Read the article

  • "This computer has dynamically assigned IP addresses" error when installing Active Directory Domain Controller

    - by smhnaji
    This is a working Windows Server 2008 machine that I need to install Active Directory on. I found http://www.howtogeek.com/99323/ and followed the steps. After Additional Domain Controller Options, I get the warning "This computer has dynamically assigned IP addresses". As I read it, the message states that dynamic IP addressing is in use on the server, which is wrong. When I go to Network and Sharing Center and click Local Area Connections - Properties - Internet Protocol Version 4 (TCP/IPv4) - Properties, I see that the main IP address (as well as the DNS server) and all other IP addresses are assigned statically, so it should be OK. I can't see how the server could be using dynamic IP(s)! Note: no IPv6 has been configured on the server. Please tell me why the warning is given and which of the available options I should choose. Note that it's a production server working with many users in a WORKGROUP; no changes should be made to the IPs, nor should the users connecting to the server be affected.

    Read the article

  • Monitoring System for the cloud?

    - by Maxim Veksler
    I need a monitoring system, much like Ganglia/Nagios, that is built for the cloud. I need it to support: adding/removing nodes dynamically (a node shutting down does not imply node failure...); dynamic node-based categorization, meaning a node can identify itself as being part of group X (Ganglia gets this almost right, but lacks the dynamic part...); no requirement for multicast support (generally not allowed in cloud-based setups); and plugins for recent cool stuff such as Hadoop, Cassandra and Mongo would be cool. More features include: external API, web interface and co. I've looked at Ganglia and Munin, and they both seem to be almost there (but not exactly). I would also go for a reasonably priced Software-as-a-Service solution. I'm currently doing research, so suggestions are highly appreciated. Thank you, Maxim

    Read the article

  • faster (squid + apache httpd + apache tomcat)

    - by letronje
    We have a production setup with Squid in front (caching images, JS, CSS, etc.), Apache httpd in the middle (prefork + mod_rewrite + mod_jk/AJP + mod_deflate + mod_php (a few PHP pages)), and Apache Tomcat 5.5 at the end serving all the dynamic stuff. What would be the best way to reduce the overhead of having 3 servers in the request path? I'm wondering if replacing httpd with a faster web server like nginx/lighttpd would help. httpd right now does the job of URL rewriting (for clean URLs), talking to Tomcat (via mod_jk), compressing output (mod_deflate) and serving some low-traffic PHP pages. What would be the ideal replacement for httpd given that we need these features? Is there a way to replace (Squid + Apache) with a single entity that does caching well (like Squid) for static stuff, rewrites URLs, compresses responses and forwards dynamic stuff directly to Tomcat? I've heard about Varnish Cache and am wondering if it can help.

    Read the article

  • Using a script that uses Duplicity + S3 excluding large files

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. If I run the script, duplicity gives an error; however, if I copy and paste the same command generated by the script, everything works... Here is the script #!/bin/bash # Export some ENV variables so you don't have to type anything export AWS_ACCESS_KEY_ID="accesskey" export AWS_SECRET_ACCESS_KEY="secretaccesskey" export PASSPHRASE="password" SOURCE=/home/ DEST=s3+http://s3bucket GPG_KEY="gpgkey" # exclude files over 100MB exclude () { find /home/jason -size +100M \ | while read FILE; do echo -n " --exclude " echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g' #Replace whitespace with "\ " done } echo "Using Command" echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST" duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST # Reset the ENV variables. export AWS_ACCESS_KEY_ID= export AWS_SECRET_ACCESS_KEY= export PASSPHRASE= When the script is run I get the error: Command line error: Expected 2 args, got 6 Where am I going wrong?
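
    A likely culprit is quoting: the single quotes echoed by exclude() reach duplicity as literal characters, and the unquoted `exclude` output is re-split on whitespace, so any filename containing a space becomes extra positional arguments (hence "Expected 2 args, got 6"). A minimal sketch of one way around it, assuming bash (arrays are not POSIX sh):

        # build the exclude options as an array so each pattern stays one argument
        EXCLUDES=()
        while IFS= read -r FILE; do
            EXCLUDES+=( --exclude "**${FILE##/*/}" )
        done < <(find /home/jason -size +100M)

        duplicity --encrypt-key="$GPG_KEY" --sign-key="$GPG_KEY" \
            "${EXCLUDES[@]}" "$SOURCE" "$DEST"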

    Read the article

  • Can't access shared drive when connecting over VPN

    - by evolvd
    I can ping all network devices but it doesn't seem that DNS is resolving their hostnames. ipconfig/ all is showing that I am pointing to the correct dns server. I can "ping "dnsname"" and it will resolve but it wont resolve any other names. Split tunnel is set up so outside DNS is resolving fine So one issue might be DNS but I have the IP address of the server share so I figure I could just get to it that way. example: \10.0.0.1\ well I can't get to it that way either and I get "the specified network name is no longer available" I can ping it but I can't open the share. Below is the ASA config : ASA Version 8.2(1) ! hostname KG-ASA domain-name example.com names ! interface Vlan1 nameif inside security-level 100 ip address 10.0.0.253 255.255.255.0 ! interface Vlan2 nameif outside security-level 0 ip address dhcp setroute ! interface Ethernet0/0 switchport access vlan 2 ! interface Ethernet0/1 ! interface Ethernet0/2 ! interface Ethernet0/3 ! interface Ethernet0/4 ! interface Ethernet0/5 ! interface Ethernet0/6 ! interface Ethernet0/7 ! ftp mode passive clock timezone EST -5 clock summer-time EDT recurring dns domain-lookup outside dns server-group DefaultDNS name-server 10.0.0.101 domain-name blah.com access-list OUTSIDE_IN extended permit tcp any host 10.0.0.253 eq 10000 access-list OUTSIDE_IN extended permit tcp any host 10.0.0.253 eq 8333 access-list OUTSIDE_IN extended permit tcp any host 10.0.0.253 eq 902 access-list SPLIT-TUNNEL-VPN standard permit 10.0.0.0 255.0.0.0 access-list NONAT extended permit ip 10.0.0.0 255.255.255.0 10.0.1.0 255.255.255.0 pager lines 24 logging asdm informational mtu inside 1500 mtu outside 1500 ip local pool IPSECVPN-POOL 10.0.1.2-10.0.1.50 mask 255.255.255.0 icmp unreachable rate-limit 1 burst-size 1 asdm image disk0:/asdm-621.bin no asdm history enable arp timeout 14400 global (outside) 1 interface nat (inside) 0 access-list NONAT nat (inside) 1 0.0.0.0 0.0.0.0 static (inside,outside) tcp interface 10000 10.0.0.101 10000 netmask 255.255.255.255 static (inside,outside) tcp interface 8333 10.0.0.101 8333 netmask 255.255.255.255 static (inside,outside) tcp interface 902 10.0.0.101 902 netmask 255.255.255.255 timeout xlate 3:00:00 timeout conn 1:00:00 half-closed 0:10:00 udp 0:02:00 icmp 0:00:02 timeout sunrpc 0:10:00 h323 0:05:00 h225 1:00:00 mgcp 0:05:00 mgcp-pat 0:05:00 timeout sip 0:30:00 sip_media 0:02:00 sip-invite 0:03:00 sip-disconnect 0:02:00 timeout sip-provisional-media 0:02:00 uauth 0:05:00 absolute timeout tcp-proxy-reassembly 0:01:00 dynamic-access-policy-record DfltAccessPolicy aaa authentication enable console LOCAL aaa authentication http console LOCAL aaa authentication serial console LOCAL aaa authentication ssh console LOCAL aaa authentication telnet console LOCAL http server enable http 10.0.0.0 255.255.0.0 inside http 0.0.0.0 0.0.0.0 outside no snmp-server location no snmp-server contact snmp-server enable traps snmp authentication linkup linkdown coldstart crypto ipsec transform-set myset esp-aes esp-sha-hmac crypto ipsec transform-set ESP-3DES-SHA esp-3des esp-sha-hmac crypto ipsec security-association lifetime seconds 28800 crypto ipsec security-association lifetime kilobytes 4608000 crypto dynamic-map dynmap 1 set transform-set myset crypto dynamic-map dynmap 1 set reverse-route crypto map IPSEC-MAP 65535 ipsec-isakmp dynamic dynmap crypto map IPSEC-MAP interface outside crypto isakmp enable outside crypto isakmp policy 10 authentication pre-share encryption 3des hash sha group 2 lifetime 86400 crypto isakmp policy 65535 
authentication pre-share encryption aes hash sha group 2 lifetime 86400 telnet 0.0.0.0 0.0.0.0 inside telnet timeout 5 ssh 0.0.0.0 0.0.0.0 inside ssh 70.60.228.0 255.255.255.0 outside ssh 74.102.150.0 255.255.254.0 outside ssh 74.122.164.0 255.255.252.0 outside ssh timeout 5 console timeout 0 dhcpd dns 10.0.0.101 dhcpd lease 7200 dhcpd domain blah.com ! dhcpd address 10.0.0.110-10.0.0.170 inside dhcpd enable inside ! threat-detection basic-threat threat-detection statistics access-list no threat-detection statistics tcp-intercept ntp server 63.111.165.21 webvpn enable outside svc image disk0:/anyconnect-win-2.4.1012-k9.pkg 1 svc enable group-policy EASYVPN internal group-policy EASYVPN attributes dns-server value 10.0.0.101 vpn-tunnel-protocol IPSec l2tp-ipsec svc webvpn split-tunnel-policy tunnelspecified split-tunnel-network-list value SPLIT-TUNNEL-VPN ! tunnel-group client type remote-access tunnel-group client general-attributes address-pool (inside) IPSECVPN-POOL address-pool IPSECVPN-POOL default-group-policy EASYVPN dhcp-server 10.0.0.253 tunnel-group client ipsec-attributes pre-shared-key * tunnel-group CLIENTVPN type ipsec-l2l tunnel-group CLIENTVPN ipsec-attributes pre-shared-key * ! class-map inspection_default match default-inspection-traffic ! ! policy-map global_policy class inspection_default inspect icmp ! service-policy global_policy global prompt hostname context I'm not sure where I should go next with troubleshooting nslookup result: Default Server: blahname.blah.lan Address: 10.0.0.101
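
    Since ICMP works but the share does not, two quick client-side checks split the problem into DNS versus SMB reachability over the tunnel. This is a diagnostic sketch only; it assumes a Unix-like client with nslookup and nc available, and fileserver.blah.com is a placeholder for the real server name:

        # does the internal DNS server answer for internal names across the tunnel?
        nslookup fileserver.blah.com 10.0.0.101
        # is SMB (TCP 445) reachable on the share's IP? "name is no longer available"
        # with working pings often means 445/139 is blocked or not carried by the tunnel
        nc -zv 10.0.0.1 445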

    Read the article

  • Backup script that excludes large files using Duplicity and Amazon S3

    - by Jason
    I'm trying to write a backup script that will exclude files over a certain size. My script prints the proper command, but when it is run within the script it outputs an error; however, if the same command is run manually, everything works. Here is the script, based on one easily found with Google #!/bin/bash # Export some ENV variables so you don't have to type anything export AWS_ACCESS_KEY_ID="accesskey" export AWS_SECRET_ACCESS_KEY="secretaccesskey" export PASSPHRASE="password" SOURCE=/home/ DEST=s3+http://s3bucket GPG_KEY="7743E14E" # exclude files over 100MB exclude () { find /home/jason -size +100M \ | while read FILE; do echo -n " --exclude " echo -n \'**${FILE##/*/}\' | sed 's/\ /\\ /g' #Replace whitespace with "\ " done } echo "Using Command" echo "duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST" duplicity --encrypt-key=$GPG_KEY --sign-key=$GPG_KEY `exclude` $SOURCE $DEST # Reset the ENV variables. export AWS_ACCESS_KEY_ID= export AWS_SECRET_ACCESS_KEY= export PASSPHRASE= When run I receive the error: Command line error: Expected 2 args, got 6 Enter 'duplicity --help' for help screen. Any help you could offer would be greatly appreciated.

    Read the article

  • Unknown error sourcing a script containing 'typeset -r' wrapped in command substitution

    - by Bernard Assaf
    I wish to source a script, print the value of a variable this script defines, and then have this value be assigned to a variable on the command line with command substitution wrapping the source/print commands. This works on ksh88 but not on ksh93 and I am wondering why. $ cat typeset_err.ksh #!/bin/ksh unset _typeset_var typeset -i -r _typeset_var=1 DIR=init # this is the variable I want to print When run on ksh88 (in this case, an AIX 6.1 box), the output is as follows: $ A=$(. ./typeset_err.ksh; print $DIR) $ echo $A init When run on ksh93 (in this case, a Linux machine), the output is as follows: $ A=$(. ./typeset_err.ksh; print $DIR) -ksh: _typeset_var: is read only $ print $A ($A is undefined) The above is just an example script. The actual thing I wish to accomplish is to source a script that sets values for many variables, so that I can print just one of its values, e.g. $DIR, and have $A equal that value. I do not know in advance the value of $DIR, but I need to copy files to $DIR during execution of a different batch script. Therefore the idea I had was to source the script in order to define its variables, print the one I wanted, then have that print's output be assigned to another variable via $(...) syntax. Admittedly a bit of a hack, but I don't want to source the entire sub-script in the batch script's environment because I only need one of its variables. The typeset -r code at the beginning is what triggers the error. The script I'm sourcing contains this in order to provide a semaphore of sorts--to prevent the script from being sourced more than once in the environment. (There is an if statement in the real script that checks for _typeset_var = 1, and exits if it is already set.) So I know I can take this out and get $DIR to print fine, but the constraints of the problem include keeping the typeset -i -r. In the example script I put an unset in first, to ensure _typeset_var isn't already defined. By the way, I do know that it is not possible to unset a typeset -r variable, according to ksh93's man page for ksh. There are ways to code around this error. The favorite now is to not use typeset, but just set the semaphore without typeset (e.g. _typeset_var=1), but the error with the code as-is remains a curiosity to me, and I want to see if anyone can explain why this is happening. By the way, another idea I abandoned was to grep the variable I need out of its containing script, then print that one variable for $A to be set to; however, the variable ($DIR in the example above) might be set to another variable's value (e.g. DIR=$dom/init), and that other variable might be defined earlier in the script; therefore, I need to source the entire script to make sure all variables are defined so that $DIR is correctly defined when sourcing.
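
    One workaround sketch that sidesteps the question of why ksh93 behaves this way: source the script in a freshly forked ksh process rather than inside the command substitution's (possibly non-forked) subshell, so any readonly attribute it sets cannot collide with state already present in the calling shell:

        # run the sourcing in a separate ksh process and capture only $DIR
        A=$(ksh -c '. ./typeset_err.ksh; print -r -- "$DIR"')
        print -r -- "$A"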

    Read the article

  • Configure TCP/IP to use DHCP and a Static IP Address at the Same Time

    - by Tiago
    My computer is configured to obtain an IP address automatically using DHCP. It only has one network adapter. How do I configure an additional static IP address? I found a tutorial for Windows XP, but the procedure didn't work for Windows 7. Is it possible to configure two IP addresses on Windows 7, one static and the other dynamic? How? The dynamic address I have now is 10.17.11.162. The static IP is 10.17.30.19. The network mask is the same: 255.255.224.0. Both work independently, but I don't know how to use both at the same time.

    Read the article

  • SMTP hacked by spammer using base64 encoding to authenticate

    - by Throlkim
    Over the past day we've detected someone from China using our server to send spam email. It's very likely that he's using a weak username/password to access our SMTP server, but the problem is that he appears to be using base64 encoding to prevent us from finding out which account he's using. Here's an example from the maillog: May 5 05:52:15 195396-app3 smtp_auth: SMTP connect from (null)@193.14.55.59.broad.gz.jx.dynamic.163data.com.cn [59.55.14.193] May 5 05:52:15 195396-app3 smtp_auth: smtp_auth: SMTP user info : logged in from (null)@193.14.55.59.broad.gz.jx.dynamic.163data.com.cn [59.55.14.193] Is there any way to detect which account it is that he's using?
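
    The base64 strings in an SMTP AUTH LOGIN/PLAIN exchange are plain encodings of the username and password rather than a way of hiding them, so if a more verbose SMTP/auth log (or a packet capture on port 25) records the tokens, they decode trivially. A sketch, using a made-up token rather than anything from the log above:

        # decode a captured AUTH token (hypothetical example string)
        printf 'dGVzdHVzZXJAZXhhbXBsZS5jb20=' | base64 -d; echo
        # or encode each local account name and grep the mail logs for it
        printf 'suspect_user' | base64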

    Read the article

  • Web service not accessible from behind corporate firewalls - how come?

    - by Niro
    We run a SaaS serving a widget which is embedded in customer websites. The service includes static JavaScript code hosted on Amazon S3 and a dynamic part hosted on EC2 with Scalr (using Scalr name servers). We received feedback from users behind corporate firewalls that they can't access our service (while they can access the sites that include the widget). This does not make sense to me, since the service uses normal HTTP calls on port 80 and our URL is quite new, without any reason to be banned by firewalls. My questions are: 1. Why is the service not accessible, and what can I do about it? 2. Is it possible that one of the following is blocked by corporate firewalls: Amazon S3, the dynamic IP address provided by Amazon, or the Scalr name servers? Any other possible reasons, ways to check them, and remedies for this? Thanks!
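
    One way to narrow it down is to test each leg of the delivery path separately from an affected corporate network; the URLs below are placeholders for the real S3 object and the Scalr-managed hostname:

        # static part served from S3
        curl -sv https://s3.amazonaws.com/your-bucket/widget.js -o /dev/null
        # Scalr-managed DNS resolution, then the dynamic part on EC2
        nslookup widget-api.example.com
        curl -sv http://widget-api.example.com/ -o /dev/null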

    Read the article

  • /etc/profile.d and "ssh -t"

    - by petersohn
    I wanted to run a script on a remote machine. The simple solution is this: ssh remote1 some-script This works until the remote script wants to connect to another remote machine (remote2) that requires interactive authentication, like this one (remote2 is only reachable through remote1 in this case): ssh remote1 "ssh remote2 some-script" The solution for the problem is to use the -t option for ssh: ssh -t remote1 "ssh remote2 some-script" This works, but I get problems when I use this (where some-script may execute further ssh commands): ssh -t remote1 some-script I found that some environment variables are not set which are set when I don't use the -t option. These environment variables are set in scripts from /etc/profile.d. I guess that these scripts are not run for some reason if using the -t option, but are run if I don't use it. What's the reason for this? Is there any way to work around it? I am using SUSE Linux (version 10).
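
    If the immediate goal is just to have the /etc/profile.d variables present when a tty is forced, a simple workaround sketch is to source the profile explicitly before the script runs (this assumes a Bourne-compatible shell on remote1):

        ssh -t remote1 '. /etc/profile; some-script'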

    Read the article

  • How to automatically resume php-fpm?

    - by alfish
    I am using nginx+php-fpm on Debian Squeeze for a busy server and have had great difficulty to deal with maximum connections being reached. Here the problem is that php processes sometimes just die randomly under high load and leave the server with no php process. Then I need to manually restart php5-fpm service to bring back the server to life. I am wondering how to avoid this to happen, or at least treat the symptoms by restarting the php5-fpm automatically whenever there is not php process left to listen to incoming requests. My relevant configs are: pm = dynamic pm.max_children = 1400 pm.start_servers = 10 pm.max_spare_servers = 20 pm.process_idle_timeout = 1s; #not sure it will be useful when pm=dynamic pm.max_requests = 100000 request_terminate_timeout = 30 I appreciate your suggestions to cope with this nasty problem.

    Read the article

  • Maximum execution time of 300 seconds exceeded error while importing large MySQL database

    - by Spacedust
    I'm trying to import a 641 MB MySQL database with the command: mysql -u root -p ddamiane_fakty < domenyin_damian_fakty.sql but I get an error: ERROR 1064 (42000) at line 2351406: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '<br /> <b>Fatal error</b>: Maximum execution time of 300 seconds exceeded in <b' at line 253 However, the limits are set much higher: mysql> show global variables like "interactive_timeout"; +---------------------+-------+ | Variable_name | Value | +---------------------+-------+ | interactive_timeout | 28800 | +---------------------+-------+ 1 row in set (0.00 sec) and mysql> show global variables like "wait_timeout"; +---------------+-------+ | Variable_name | Value | +---------------+-------+ | wait_timeout | 28800 | +---------------+-------+ 1 row in set (0.00 sec)
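
    The quoted '<br /> <b>Fatal error</b> ...' text is a PHP error message embedded in the dump itself, which points to the export (for example from a PHP-based tool such as phpMyAdmin) having timed out while writing the file; the MySQL server's wait_timeout/interactive_timeout play no part here. A quick check, plus a re-export sketch in which source_db and the source host are placeholders:

        # confirm the PHP error text really sits inside the dump, near the failing line
        sed -n '2351400,2351410p' domenyin_damian_fakty.sql
        # re-export with mysqldump, which is not subject to PHP's execution time limit
        mysqldump -u root -p source_db > domenyin_damian_fakty.sql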

    Read the article

  • How is MySQL's "net_buffer_length" config viewed and reset?

    - by blunders
    Attempt to see the "net_buffer_length" config before resetting it: mysql> show variables like "net_buffer_length"; +-------------------+-------+ | Variable_name | Value | +-------------------+-------+ | net_buffer_length | 16384 | +-------------------+-------+ 1 row in set (0.00 sec) Attempt to reset "net_buffer_length" config: mysql> set global net_buffer_length=1000000; Query OK, 0 rows affected (0.00 sec) Attempt to confirm the "net_buffer_length" config has been reset: mysql> show variables like "net_buffer_length"; +-------------------+-------+ | Variable_name | Value | +-------------------+-------+ | net_buffer_length | 16384 | +-------------------+-------+ 1 row in set (0.00 sec) What's wrong with the commands I'm using that result in the config not updating? MySQL Server Version: 5.1.53-community DATABASE_ENGINE: INNOdb Questions, feedback, requests -- just comment, thanks!
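
    SET GLOBAL only changes the global value, while a bare SHOW VARIABLES reports the session value that was copied from the global when the connection was opened, so the running session keeps showing 16384 until you reconnect. A sketch of the check run from the shell, with placeholder credentials (note that MySQL also rounds net_buffer_length down to a multiple of 1024):

        mysql -u root -p -e "
            SET GLOBAL net_buffer_length = 1000000;
            SHOW GLOBAL VARIABLES LIKE 'net_buffer_length';
            SHOW SESSION VARIABLES LIKE 'net_buffer_length';"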

    Read the article

  • Languages and VMs: Features that are hard to optimize and why

    - by mrjoltcola
    I'm doing a survey of features in preparation for a research project. Name a mainstream language or language feature that is hard to optimize, and why the feature is or isn't worth the price paid, or instead, just debunk my theories below with anecdotal evidence. Before anyone flags this as subjective, I am asking for specific examples of languages or features, and ideas for optimization of these features, or important features that I haven't considered. Also, any references to implementations that prove my theories right or wrong. Top of my list of hard-to-optimize features, and my theories (some of my theories are untested and are based on thought experiments): 1) Runtime method overloading (aka multi-method dispatch or signature-based dispatch). Is it hard to optimize when combined with features that allow runtime recompilation or method addition, or is it just hard anyway? Call site caching is a common optimization for many runtime systems, but multi-methods add additional complexity as well as making it less practical to inline methods. 2) Type morphing / variants (aka value-based typing as opposed to variable-based). Traditional optimizations simply cannot be applied when you don't know if the type of something can change in a basic block. Combined with multi-methods, inlining must be done carefully if at all, and probably only for a given threshold of size of the callee, i.e. it is easy to consider inlining simple property fetches (getters/setters) but inlining complex methods may result in code bloat. The other issue is that I cannot just assign a variant to a register and JIT it to native instructions, because I have to carry around the type info, or every variable needs 2 registers instead of 1. On IA-32 this is inconvenient, even if improved with x64's extra registers. This is probably my favorite feature of dynamic languages, as it simplifies so many things from the programmer's perspective. 3) First-class continuations. There are multiple ways to implement them, and I have done so in both of the most common approaches, one being stack copying and the other implementing the runtime to use continuation passing style, cactus stacks, copy-on-write stack frames, and garbage collection. First-class continuations have resource management issues, i.e. we must save everything in case the continuation is resumed, and I'm not aware of any languages that support leaving a continuation with "intent" (i.e. "I am not coming back here, so you may discard this copy of the world"). Having programmed in both the threading model and the continuation model, I know both can accomplish the same thing, but continuations' elegance imposes considerable complexity on the runtime and also may affect cache efficiency (locality of stack changes more with use of continuations and co-routines). The other issue is they just don't map to hardware. Optimizing continuations is optimizing for the less-common case, and as we know, the common case should be fast, and the less-common cases should be correct. 4) Pointer arithmetic and the ability to mask pointers (storing them in integers, etc.). I had to throw this in, but I could actually live without this quite easily. My feeling is that many of the high-level features, particularly in dynamic languages, just don't map to hardware.
Microprocessor implementations have billions of dollars of research behind the optimizations on the chip, yet the choice of language feature(s) may marginalize many of these features (features like caching, aliasing top of stack to register, instruction parallelism, return address buffers, loop buffers and branch prediction). Macro-applications of micro-features don't necessarily pan out like some developers like to think, and implementing many languages in a VM ends up mapping native ops into function calls (ie. the more dynamic a language is the more we must lookup/cache at runtime, nothing can be assumed, so our instruction mix is made up of a higher percentage of non-local branching than traditional, statically compiled code) and the only thing we can really JIT well is expression evaluation of non-dynamic types and operations on constant or immediate types. It is my gut feeling that bytecode virtual machines and JIT cores are perhaps not always justified for certain languages because of this. I welcome your answers.

    Read the article

  • How to use sshd_config - PermitUserEnvironment option

    - by laks
    I have client1 and client2; both are Linux machines. From client1: client1$ ssh root@client2 "env" displays the list of environment variables from client2. Things I did on client2: I want to add a new variable on client2, so I edited sshd_config to set PermitUserEnvironment yes and created a file named environment under ssh with the following entry: Hi=Hello then restarted sshd (/etc/init.d/sshd). Now, trying the same command from client1: client1$ ssh root@client2 "env" does not show the new variable "Hi". ref: http://www.raphink.info/2008/09/forcing-environment-in-ssh.html http://www.netexpertise.eu/en/ssh/environment-variables-and-ssh.html/comment-page-1#comment-1703
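
    One common tripping point is the location of the environment file: with PermitUserEnvironment yes, sshd reads ~/.ssh/environment in the home directory of the account being logged into on client2 (plus any environment= options in that account's authorized_keys), not a file under /etc/ssh. A sketch of the full setup, assuming root's home is /root:

        # on client2, as the target account (root here)
        echo 'Hi=Hello' >> /root/.ssh/environment
        chmod 600 /root/.ssh/environment
        grep -i '^PermitUserEnvironment' /etc/ssh/sshd_config   # expect: PermitUserEnvironment yes
        /etc/init.d/sshd restart
        # then verify from client1
        ssh root@client2 'env | grep ^Hi'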

    Read the article

  • Cross-Forest Trust

    - by cdalley
    I am looking at testing a cross-domain trust: we can have two domain controllers (with different forests and domain names) set up so we can move everyone onto the new domain. We do NOT run Exchange on site, and we do not currently have any links from O365 to AD. On to the problem: I have set up two DCs in virtual machines; they are on the same network, 192.168.0.*. The Windows 2003 server: Name: OLDSRVR, a "clone" of our current domain controller, IP: 192.168.0.1, Domain: internal.test.com. The Windows 2012 server: Name: ADCTEST01, a brand new domain set up from scratch, separate from internal.test.com, Domain: internal.test2.com, IP: 192.168.0.2. OLDSRVR can only see ADCTEST01 if it has a dynamic IP set; if I set a static IP it cannot see it. If I use the dynamic IP and try to join, it gets to the end and then complains "The trust relationship between this workstation and the primary domain failed". Any ideas?

    Read the article

  • PRNG test suite: bitstream and stream length

    - by Martin Trigaux
    On the NIST website there is a tool called sts (Statistical Test Suite) that allows us to test the validity of a pseudo-random number generator based on a stream of bits given as input. When running the program, there are two variables I am not sure I understand: the stream length and the number of bitstreams. Is the stream length the size of the file? The number of bits inside it? The size of a bitstream? Are the bitstreams subsets of the whole file? Chosen how? Let's say I have a text file containing 1,000,000 bits in ASCII. What should my arguments be? You can find the user manual here if needed (I didn't find an explanation of these variables in it). Thank you
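
    As far as I understand the tool, the input is treated as a series of independent sequences: the stream length is the number of bits in each sequence, the number of bitstreams is how many such sequences are read one after another from the file, and their product must not exceed the bits available. So a 1,000,000-bit ASCII file could, for example, be analysed as 10 bitstreams of 100,000 bits each. A hedged invocation sketch (assess takes only the stream length on the command line and asks for everything else interactively):

        # the stream length in bits is the sole argument; the input file, the tests
        # to run, the number of bitstreams (10 here) and the ASCII/binary data format
        # are then chosen at the interactive prompts
        ./assess 100000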

    Read the article
