Search Results

Search found 38064 results on 1523 pages for 'oracle linux'.


  • FairScheduling Conventions in Hadoop

    - by dan.mcclary
    While scheduling and resource allocation control has been present in Hadoop since 0.20, a lot of people haven't discovered or utilized it in their initial investigations of the Hadoop ecosystem. We could chalk this up to many things:
    - Organizations are still determining what their dataflow and analysis workloads will comprise
    - Small deployments under test aren't likely to show the signs of strain that would send someone looking for resource allocation options
    - The default scheduling options -- the FairScheduler and the CapacityScheduler -- are not placed in the most prominent position within the Hadoop documentation.
    However, for production deployments, it's wise to start with at least the foundations of scheduling in place so that you can tune the cluster as workloads emerge. To do that, we have to ask ourselves what the off-the-rack scheduling options are. We have some choices:
    - The FairScheduler, which will work to ensure resource allocations are enforced on a per-job basis.
    - The CapacityScheduler, which will ensure resource allocations are enforced on a per-queue basis.
    - Writing your own implementation of the abstract class org.apache.hadoop.mapred.job.TaskScheduler is an option, but usually overkill.
    If you're going to have several concurrent users and leverage the more interactive aspects of the Hadoop environment (e.g. Pig and Hive scripting), the FairScheduler is definitely the way to go. In particular, we can define user-specific pools so that default users get their fair share, and specific users are given the resources their workloads require.
    To enable fair scheduling, we're going to need to do a couple of things. First, we need to tell the JobTracker that we want to use scheduling and where we're going to be defining our allocations. We do this by adding the following to the mapred-site.xml file in HADOOP_HOME/conf:

      <property>
        <name>mapred.jobtracker.taskScheduler</name>
        <value>org.apache.hadoop.mapred.FairScheduler</value>
      </property>
      <property>
        <name>mapred.fairscheduler.allocation.file</name>
        <value>/path/to/allocations.xml</value>
      </property>
      <property>
        <name>mapred.fairscheduler.poolnameproperty</name>
        <value>pool.name</value>
      </property>
      <property>
        <name>pool.name</name>
        <value>${user.name}</value>
      </property>

    What we've done here is simply tell the JobTracker that we'd like task scheduling to use the FairScheduler class rather than a single FIFO queue. Moreover, we're going to be defining our resource pools and allocations in a file called allocations.xml. For reference, the allocation file is read every 15s or so, which allows for tuning allocations without having to take down the JobTracker. Our allocation file is now going to look a little like this:

      <?xml version="1.0"?>
      <allocations>
        <pool name="dan">
          <minMaps>5</minMaps>
          <minReduces>5</minReduces>
          <maxMaps>25</maxMaps>
          <maxReduces>25</maxReduces>
          <minSharePreemptionTimeout>300</minSharePreemptionTimeout>
        </pool>
        <user name="dan">
          <maxRunningJobs>6</maxRunningJobs>
        </user>
        <userMaxJobsDefault>3</userMaxJobsDefault>
        <fairSharePreemptionTimeout>600</fairSharePreemptionTimeout>
      </allocations>

    In this case, I've explicitly set my username to have upper and lower bounds on the maps and reduces, and allotted myself double the number of running jobs. Now, if I run Hive or Pig jobs from either the console or via the Hue web interface, I'll be treated "fairly" by the JobTracker.
    There's a lot more tweaking that can be done to the allocations file, so it's best to dig down into the description and start trying out allocations that might fit your workload.

    Read the article

  • An XKB keyboard map that responds to the left and right shift key individually

    - by mbfisher
    First off, excuse my ignorance of X and XKB; I've been trying to hack together a solution in the hope of being able to achieve what I want without requiring a detailed grasp of it.
    I'm trying to create an XKB keyboard map on Ubuntu 12.04 that allows me to stipulate which of the two shift keys constitutes the Level2 modifier. Specifically, the 4 key should only produce a $ when the right shift is held, not the left.
    My reading so far:
    - http://www.charvolant.org/~doug/xkb/html/node5.html
    - http://people.uleth.ca/~daniel.odonnell/Blog/custom-keyboard-in-linuxx11
    - http://www.x.org/releases/X11R7.5/doc/input/XKB-Enhancing.html
    Lots of searching! I've attempted to define a custom type, and then refer to it explicitly in a symbols map:
    /usr/share/X11/xkb/types/mbfisher:

      default xkb_types "mbfisher" {
          type "RIGHT_SHIFT" {
              modifiers = None+Shift_R;
              map[None] = Level1;
              map[Shift_R] = Level2;
          };
      }

    /usr/share/X11/xkb/symbols/mbfisher:

      default partial alphanumeric_keys
      xkb_symbols "basic" {
          name[Group1]= "mbfisher";
          key <AE04> { type= "RIGHT_SHIFT", symbols[Group1]= [ 4, dollar ] };
      };

    I'm then selecting the map with the Ubuntu Keyboard Layout GUI. This obviously disables the alphanumeric keyboard apart from the 4 key, but the dollar sign can still be typed with either shift key.
    I'm conscious of writing a massive question with lots of useless information so I'll stop here; please ask for anything I've missed out. Any ideas?

    Read the article

  • List full timestamps of files in a tarball

    - by Mechanical snail
    I have a large tar archive and want to see the exact (nanosecond) timestamps that are stored for each file in the archive. In case it's relevant, the tarball is in POSIX-2001 format (tar --format=posix). tar --list --verbose displays the timestamps rounded off to the minute. For comparison, ls --full-time does what I want, but I'd rather not have to extract everything first because it's huge. For my purposes, command-line and GUI tools are both fine.
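    In case it helps, here is a minimal sketch of listing the stored timestamps without extracting anything, assuming a Python interpreter is available (the archive name is a placeholder):

      #!/usr/bin/env python3
      import tarfile

      # Iterate over archive members without extracting them.
      # For pax (POSIX-2001) archives, tarfile applies the pax mtime record, so
      # member.mtime may be a float with sub-second precision (full nanosecond
      # resolution can still be lost to float rounding).
      with tarfile.open("archive.tar") as tf:
          for member in tf:
              print(repr(member.mtime), member.name)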

    Read the article

  • Terse, documented, correct way to create Kerberos-backed user shares in Greyhole

    - by MrGomez
    As a migration strategy away from Windows Home Server (which is currently out of support and intractable for our needs, for a variety of reasons), our little cloister of nerds has targeted Greyhole for our shared use at home. Despite the documentation's terseness, getting the system set up for simple, single-user operation isn't especially difficult, but this scenario fails to service our needs. Among other highlights of the system, we're attempting to emulate Integrated Windows Authentication (with Kerberos) and single-user shares to keep the Windows users in the house happy and well-supported. I'm aware of the underlying systems that go into Greyhole and understand how to set up per-user shares in Samba, but the documentation doesn't seem to support cases for Greyhole to sop up these directories as separate landing zones for replication. Enter my question: are both of these cases (IWA user authentication and user-partitioned personal shares) supported by Greyhole? If so, please cite or link the supporting documentation if it exists.

    Read the article

  • PHP oci8 dll not loading on windows 64 bit XP. What am I doing wrong?

    - by user47354
    On Windows 64-bit I installed Apache, PHP etc. Everything works fine except the Oracle part. I can connect to Oracle from SQL Developer, which means my tnsnames.ora file is correct. When Apache starts, there are no errors in the logs. But when I try to connect to Oracle from PHP, the Oracle module php_oci8.dll is not loaded. What am I doing wrong?
    - The oci8 dll line in php.ini is there, and it is uncommented
    - There are no errors in the Apache logs
    - extension_dir in the php.ini file points to the correct location

    Read the article

  • solved: puppet master REST API returns 403 when running under Passenger, but works when the master runs from the command line

    - by Anadi Misra
    I am using the standard auth.conf provided in the puppet install for the puppet master, which is running through Passenger under Nginx. However, for most of the catalog, file and certificate requests I get a 403 response.

      ### Authenticated paths - these apply only when the client
      ### has a valid certificate and is thus authenticated

      # allow nodes to retrieve their own catalog
      path ~ ^/catalog/([^/]+)$
      method find
      allow $1

      # allow nodes to retrieve their own node definition
      path ~ ^/node/([^/]+)$
      method find
      allow $1

      # allow all nodes to access the certificates services
      path ~ ^/certificate_revocation_list/ca
      method find
      allow *

      # allow all nodes to store their reports
      path /report
      method save
      allow *

      # unconditionally allow access to all file services
      # which means in practice that fileserver.conf will
      # still be used
      path /file
      allow *

      ### Unauthenticated ACL, for clients for which the current master doesn't
      ### have a valid certificate; we allow authenticated users, too, because
      ### there isn't a great harm in letting that request through.

      # allow access to the master CA
      path /certificate/ca
      auth any
      method find
      allow *

      path /certificate/
      auth any
      method find
      allow *

      path /certificate_request
      auth any
      method find, save
      allow *

      path /facts
      auth any
      method find, search
      allow *

      # this one is not strictly necessary, but it has the merit
      # of showing the default policy, which is deny everything else
      path /
      auth any

    The puppet master, however, does not seem to be following this, as I get this error on the client:

      [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com
      [sudo] password for amisr1:
      Starting Puppet client version 3.0.1
      Warning: Unable to fetch my node definition, but the agent run will continue:
      Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110
      Info: Retrieving plugin
      Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110
      Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
      Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
      Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110
      Using cached catalog
      Error: Could not retrieve catalog; skipping run
      Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110

    and the server logs show:

      XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby"
      XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby"
      XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
      XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby"
      XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"

    The fileserver.conf file is as follows (and going by what they say on the Puppet site, it is better to regulate access in auth.conf for reaching the file server and then allow the file server to serve all):

      [files]
        path /apps/puppet/files
        allow *

      [private]
        path /apps/puppet/private/%H
        allow *

      [modules]
        allow *

    I am using server and client version 3. Nginx has been compiled using the following options:

      nginx version: nginx/1.3.9
      built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
      TLS SNI support enabled
      configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/

    and the standard nginx puppet master conf:

      server {
        ssl on;
        listen 8140 ssl;
        server_name _;
        passenger_enabled on;
        passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
        passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
        passenger_min_instances 5;
        access_log logs/puppet_access.log;
        error_log logs/puppet_error.log;
        root /apps/nginx/html/rack/public;
        ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem;
        ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem;
        ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
        ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
        ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
        ssl_prefer_server_ciphers on;
        ssl_verify_client optional;
        ssl_verify_depth 1;
        ssl_session_cache shared:SSL:128m;
        ssl_session_timeout 5m;
      }

    Puppet is picking up the correct settings from the files mentioned, because the config print command points to /etc/puppet:

      [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf
      async_storeconfigs = false
      authconfig = /etc/puppet/namespaceauth.conf
      autosign = /etc/puppet/autosign.conf
      catalog_cache_terminus = store_configs
      confdir = /etc/puppet
      config = /etc/puppet/puppet.conf
      config_file_name = puppet.conf
      config_version = ""
      configprint = all
      configtimeout = 120
      dblocation = /var/lib/puppet/state/clientconfigs.sqlite3
      deviceconfig = /etc/puppet/device.conf
      fileserverconfig = /etc/puppet/fileserver.conf
      genconfig = false
      hiera_config = /etc/puppet/hiera.yaml
      localconfig = /var/lib/puppet/state/localconfig
      name = config
      rest_authconfig = /etc/puppet/auth.conf
      storeconfigs = true
      storeconfigs_backend = puppetdb
      tagmap = /etc/puppet/tagmail.conf
      thin_storeconfigs = false

    I checked the firewall rules on this VM; 80, 443, 8140, 3000 are allowed. Do I still have to tweak any specifics in auth.conf to get this to work?

    Update

    I added verbose logging to the puppet master and restarted nginx; here's the additional info I see in the logs:

      Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31
      Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31
      Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
      Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
      10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"

    On the agent machine, facter fqdn and hostname both return a fully qualified host name:

      [amisr1@blramisr195602 ~]$ sudo facter fqdn
      blramisr195602.XXXXXXX.com

    I then updated the agent configuration to add dns_alt_names = 10.209.47.31, cleaned all certificates on master and agent, regenerated the certificates, and signed them on the master using the option --allow-dns-alt-names:

      [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com
      Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request.
      [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com
      Signed certificate request for blramisr195602.XXXXXX.com
      Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem'

    However, that doesn't help either; I get the same errors as before. Not sure why the logs show access rules being compared by IP and not hostname. Is there any Nginx configuration to change this behavior?

    Read the article

  • MySQL blocking new connections, and mysqladmin flush-hosts

    - by aidan
    I'm running MySQL on a remote server, and it suddenly started rejecting all connections:

      $ mysql -h 192.168.1.10 -u root -p
      ERROR 1129 (00000): Host 'web' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts'

    So, I try this flush-hosts command...

      $ mysqladmin flush-hosts -h 192.168.1.10 -u root -p
      mysqladmin: connect to server at '192.168.1.10' failed
      error: 'Host 'web' is blocked because of many connection errors; unblock with 'mysqladmin flush-hosts''

    I.e. it's blocking the very un-blocking tool it recommends. Am I doing it wrong, or will I have to resort to ssh/cpanel/physical access?

    Read the article

  • Wireless disconnect randomly with wpa_supplicant reason=2

    - by renenglish
    I installed ubuntu-server 12.04 on my PC, and I use a USB wireless card to join the network. It works OK when I boot up the PC, but the wireless disconnects after a while. I pkill wpa_supplicant and reload the rtl8192cu driver, then it works again. Then it disconnects again after a random number of minutes. Here is the syslog:

      22384 May 29 21:49:27 homecenter kernel: [ 6450.459313] wlan1: authenticated
      22385 May 29 21:49:27 homecenter kernel: [ 6450.459535] wlan1: associate with f4:ec:38:45:62:74 (try 1)
      22386 May 29 21:49:27 homecenter kernel: [ 6450.469080] wlan1: RX AssocResp from f4:ec:38:45:62:74 (capab=0x431 status=0 aid=3)
      22387 May 29 21:49:27 homecenter kernel: [ 6450.469085] wlan1: associated
      22388 May 29 21:49:27 homecenter wpa_supplicant[2342]: Associated with f4:ec:38:45:62:74
      22389 May 29 21:49:27 homecenter kernel: [ 6450.481933] ADDRCONF(NETDEV_CHANGE): wlan1: link becomes ready
      22390 May 29 21:49:27 homecenter wpa_supplicant[2342]: WPA: Key negotiation completed with f4:ec:38:45:62:74 [PTK=CCMP GTK=CCMP]
      22391 May 29 21:49:27 homecenter wpa_supplicant[2342]: CTRL-EVENT-CONNECTED - Connection to f4:ec:38:45:62:74 completed (auth) [id=0 id_str=]
      22392 May 29 21:49:38 homecenter kernel: [ 6461.472014] wlan1: no IPv6 routers present
      22393 May 29 21:49:38 homecenter ntpdate[2263]: step time server 91.189.94.4 offset 0.012758 sec
      22394 May 29 21:49:51 homecenter ntpdate[2404]: step time server 91.189.94.4 offset -0.001190 sec
      22395 May 29 21:54:38 homecenter kernel: [ 6762.052030] wlan1: deauthenticated from f4:ec:38:45:62:74 (Reason: 2)
      22396 May 29 21:54:38 homecenter wpa_supplicant[2342]: CTRL-EVENT-DISCONNECTED bssid=f4:ec:38:45:62:74 reason=2
      22397 May 29 21:54:38 homecenter kernel: [ 6762.064744] cfg80211: All devices are disconnected, going to restore regulatory settings
      22398 May 29 21:54:38 homecenter kernel: [ 6762.064752] cfg80211: Restoring regulatory settings
      22399 May 29 21:54:38 homecenter kernel: [ 6762.064757] cfg80211: Calling CRDA to update world regulatory domain
      22400 May 29 21:54:38 homecenter kernel: [ 6762.069938] cfg80211: Ignoring regulatory request Set by core since the driver uses its own custom regulatory domain
      22401 May 29 21:54:38 homecenter kernel: [ 6762.069943] cfg80211: World regulatory domain updated:
      22402 May 29 21:54:38 homecenter kernel: [ 6762.069945] cfg80211:     (start_freq - end_freq @ bandwidth), (max_antenna_gain, max_eirp)
      22403 May 29 21:54:38 homecenter kernel: [ 6762.069949] cfg80211:     (2402000 KHz - 2472000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      22404 May 29 21:54:38 homecenter kernel: [ 6762.069952] cfg80211:     (2457000 KHz - 2482000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
      22405 May 29 21:54:38 homecenter kernel: [ 6762.069956] cfg80211:     (2474000 KHz - 2494000 KHz @ 20000 KHz), (300 mBi, 2000 mBm)
      22406 May 29 21:54:38 homecenter kernel: [ 6762.069959] cfg80211:     (5170000 KHz - 5250000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)
      22407 May 29 21:54:38 homecenter kernel: [ 6762.069962] cfg80211:     (5735000 KHz - 5835000 KHz @ 40000 KHz), (300 mBi, 2000 mBm)

    Read the article

  • Testing home directory scripts by setting $HOME to the location of the test directory

    - by intuited
    I have an interdependent collection of scripts in my ~/bin directory as well as a developed ~/.vim directory and some other libraries and such in other subdirectories. I've been versioning all of this using git, and have realized that it would be potentially very easy and useful to do development and testing of new and existing scripts, vim plugins, etc. using a cloned repo, and then pull the working code into my actual home directory with a merge. The easiest way to do this would seem to be to just change & export $HOME, e.g.

      cd ~/testing; git clone ~ home
      export HOME=~/testing/home
      cd ~
      screen -S testing-home
      # start vim, write/revise plugins, edit scripts, etc.
      # test revisions

    However, since I've never tried this before I'm concerned that some programs, environment variables, etc., may end up using my actual home directory instead of the exported one. Is this a viable strategy? Are there just a few outliers that I should be careful about? Is there a much better way to do this sort of thing?
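    A minimal sketch of the distinction that usually decides the outliers, assuming Python is available: tools that expand ~ through the HOME environment variable will follow the override, while tools that look the user up in the passwd database will not.

      #!/usr/bin/env python3
      import os
      import pwd

      # Expansion via the environment: os.path.expanduser checks $HOME first,
      # so this follows an exported override.
      print("expanduser('~') :", os.path.expanduser("~"))

      # Lookup via the passwd database (getpwuid): this ignores $HOME and keeps
      # returning the real home directory.
      print("passwd entry    :", pwd.getpwuid(os.getuid()).pw_dir)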

    Read the article

  • iptables to allow 80 and 443 on chillispot running ddwrt

    - by user76682
    I am having problems setting this up. This is what I am trying to do: I have Chillispot (hotspot) running on dd-wrt. Everything is set up, but the client wants only 80 and 443 to go through the hotspot. I found this tutorial for dd-wrt but that doesn't seem to work:
    http://www.dd-wrt.com/wiki/index.php/Iptables#Allow_HTTP_traffic_only_to_specific_domain.28s.29
    Initially I tried to place the options at the top but it didn't work. Then I flushed the iptables and set only these three. I can see the pkts number grow but for some reason I can browse.

      root@DD-WRT:~# iptables -nvL FORWARD
      Chain FORWARD (policy ACCEPT 3105 packets, 2442K bytes)
      pkts bytes target      prot opt in   out  source      destination
      1629 230K  ACCEPT      tcp  --  *    *    0.0.0.0/0   0.0.0.0/0   multiport dports 21,80,443
      2346 2792K ACCEPT      0    --  *    *    0.0.0.0/0   0.0.0.0/0   state RELATED,ESTABLISHED
      328  46420 DROP        0    --  *    *    0.0.0.0/0   0.0.0.0/0

    Here's some info from the router; chillispot is the tun0 interface.

      root@DD-WRT:~# iptables -vnL FORWARD --line-numbers
      Chain FORWARD (policy DROP 0 packets, 0 bytes)
      num pkts bytes target      prot opt in    out   source           destination
      1   0    0     ACCEPT      47   --  *     vlan1 192.168.8.0/24   0.0.0.0/0
      2   0    0     ACCEPT      tcp  --  *     vlan1 192.168.8.0/24   0.0.0.0/0   tcp dpt:1723
      3   32   1851  ACCEPT      0    --  tun0  *     0.0.0.0/0        0.0.0.0/0   state NEW
      4   0    0     ACCEPT      0    --  br0   br0   0.0.0.0/0        0.0.0.0/0
      5   48   2408  TCPMSS      tcp  --  *     *     0.0.0.0/0        0.0.0.0/0   tcp flags:0x06/0x02 TCPMSS clamp to PMTU
      6   756  452K  lan2wan     0    --  *     *     0.0.0.0/0        0.0.0.0/0
      7   756  452K  ACCEPT      0    --  *     *     0.0.0.0/0        0.0.0.0/0   state RELATED,ESTABLISHED
      8   0    0     TRIGGER     0    --  vlan1 br0   0.0.0.0/0        0.0.0.0/0   TRIGGER type:in match:0 relate:0
      9   0    0     trigger_out 0    --  br0   *     0.0.0.0/0        0.0.0.0/0
      10  0    0     ACCEPT      0    --  br0   *     0.0.0.0/0        0.0.0.0/0   state NEW
      11  0    0     DROP        0    --  *     *     0.0.0.0/0        0.0.0.0/0
      12  0    0     DROP        0    --  br0   *     0.0.0.0/0        0.0.0.0/0
      13  0    0     DROP        0    --  *     br0   0.0.0.0/0        0.0.0.0/0

    The interfaces:

      root@DD-WRT:~# ifconfig -a
      br0       Link encap:Ethernet HWaddr 00:12:17:CF:80:5F
                inet addr:192.168.8.1 Bcast:192.168.8.255 Mask:255.255.255.0
                UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
                RX packets:2371 errors:0 dropped:0 overruns:0 frame:0
                TX packets:1862 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:259721 (253.6 KiB) TX bytes:254862 (248.8 KiB)

      br0:0     Link encap:Ethernet HWaddr 00:12:17:CF:80:5F
                inet addr:169.254.255.1 Bcast:169.254.255.255 Mask:255.255.0.0
                UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

      eth0      Link encap:Ethernet HWaddr 00:12:17:CF:80:5F
                UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
                RX packets:5050 errors:0 dropped:0 overruns:0 frame:0
                TX packets:2508 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:1066410 (1.0 MiB) TX bytes:376001 (367.1 KiB)
                Interrupt:5

      eth1      Link encap:Ethernet HWaddr 00:12:17:CF:80:61
                UP BROADCAST RUNNING ALLMULTI MULTICAST MTU:1500 Metric:1
                RX packets:729 errors:0 dropped:0 overruns:0 frame:114693
                TX packets:697 errors:2 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:107869 (105.3 KiB) TX bytes:473134 (462.0 KiB)
                Interrupt:4 Base address:0x1000

      etherip0  Link encap:Ethernet HWaddr 1E:13:B7:09:CC:8C
                BROADCAST MULTICAST MTU:1500 Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

      lo        Link encap:Local Loopback
                inet addr:127.0.0.1 Mask:255.0.0.0
                UP LOOPBACK RUNNING MULTICAST MTU:16436 Metric:1
                RX packets:18 errors:0 dropped:0 overruns:0 frame:0
                TX packets:18 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:1210 (1.1 KiB) TX bytes:1210 (1.1 KiB)

      teql0     Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                NOARP MTU:1500 Metric:1
                RX packets:0 errors:0 dropped:0 overruns:0 frame:0
                TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:100
                RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

      tun0      Link encap:UNSPEC HWaddr 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00
                inet addr:192.168.182.1 P-t-P:192.168.182.1 Mask:255.255.255.0
                UP POINTOPOINT RUNNING MTU:1500 Metric:1
                RX packets:662 errors:0 dropped:0 overruns:0 frame:0
                TX packets:587 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:10
                RX bytes:92167 (90.0 KiB) TX bytes:427657 (417.6 KiB)

      vlan0     Link encap:Ethernet HWaddr 00:12:17:CF:80:5F
                UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
                RX packets:2371 errors:0 dropped:0 overruns:0 frame:0
                TX packets:1864 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:269558 (263.2 KiB) TX bytes:262680 (256.5 KiB)

      vlan1     Link encap:Ethernet HWaddr 00:12:17:CF:80:60
                inet addr:10.3.2.47 Bcast:10.255.255.255 Mask:255.255.255.0
                UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
                RX packets:2675 errors:0 dropped:0 overruns:0 frame:0
                TX packets:645 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:705429 (688.8 KiB) TX bytes:102197 (99.8 KiB)

    The routing table:

      root@DD-WRT:~# netstat -nr
      Kernel IP routing table
      Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
      192.168.182.0   0.0.0.0         255.255.255.0   U         0 0          0 tun0
      10.3.2.0        0.0.0.0         255.255.255.0   U         0 0          0 vlan1
      192.168.8.0     0.0.0.0         255.255.255.0   U         0 0          0 br0
      169.254.0.0     0.0.0.0         255.255.0.0     U         0 0          0 br0
      127.0.0.0       0.0.0.0         255.0.0.0       U         0 0          0 lo
      0.0.0.0         10.3.2.1        0.0.0.0         UG        0 0          0 vlan1

    Highly appreciate your help. TIA, Arun

    Read the article

  • Search all files containing text

    - by enthdegree
    With Busybox, how do you search for an expression within a bunch of files recursively through a bunch of directories, but only look through text files? We don't know what the file's suffix is going to be; it could be .sh, it could be nothing, it could be something else. I was considering somehow basing the search on encoding although I am not quite sure what the encoding would be either. I've tried busybox grep -r but it searches through binary files too, which wastes a lot of time.
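    If a full Python interpreter happens to be on the box (often not the case on a BusyBox-only system), a rough sketch of the "text files only" heuristic is to skip any file whose first block contains a NUL byte, which is roughly how grep decides a file is binary:

      #!/usr/bin/env python3
      import os
      import sys

      def looks_like_text(path, blocksize=4096):
          # Treat a file as binary if its first block contains a NUL byte.
          try:
              with open(path, "rb") as f:
                  return b"\x00" not in f.read(blocksize)
          except OSError:
              return False

      pattern, root = sys.argv[1], sys.argv[2]
      for dirpath, _dirs, files in os.walk(root):
          for name in files:
              path = os.path.join(dirpath, name)
              if not looks_like_text(path):
                  continue
              with open(path, "r", errors="replace") as f:
                  for lineno, line in enumerate(f, 1):
                      if pattern in line:
                          print(f"{path}:{lineno}:{line.rstrip()}")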

    Read the article

  • Developing an Implementation Plan with Iterations by Russ Pitts

    - by user535886
    Developing an Implementation Plan with Iterations by Russ Pitts

    Ok, so you have come to grips with understanding that applying the iterative concept, as defined by OUM, is simply breaking up the project effort you have estimated for each phase into one or more six-week calendar duration blocks of work. The idea being that the business user(s) or key recipient(s) of the work product(s) being developed never go longer than six weeks without having some sort of review or prototyping of the work results for an iteration..."think-a-little", "do-a-little", and "show-a-little" in a six-week or less timeframe...ideally the business user(s) or key recipient(s) are involved throughout.

    You also understand the OUM concept that you only plan for that which you have knowledge of. The concept further defined: a project plan initially is developed at a high level, and becomes more detailed as project knowledge grows. Agreeing to this concept means you also have to admit to the fallacy that one can plan with precision beyond six weeks into a project...Anything beyond six weeks is a best guess in most cases when dealing with software implementation projects.

    Project planning, as defined by OUM, begins with the Implementation Plan view, which is a very high-level perspective of the effort estimated for each of the five OUM phases, as well as the number of iterations within each phase. You might wonder how you can predict the number of iterations for each phase at this early point in the project. Remember project planning is not an exact science, and initially is high-level and abstract in nature, and then becomes more detailed and precise as the project proceeds.

    So where do you start in defining iterations for each phase of a project? The following are three easy steps to initially define the number of iterations for each phase:

    Step 1 => Start with identifying the known factors...
    Prior to starting a project you should know:
    - The agreed upon time-period for an iteration (e.g. 6 weeks, or 4 weeks, or...) within a phase (recommend keeping the iteration time-period consistent within a phase, if not for the entire project)
    - The number of resources available for the project
    - The total number of man-days (effort) you have estimated for each of the five OUM phases of the project
    - The number of work days in a week

    Step 2 => Calculate the man-days of effort required for an iteration within a phase...
    Let's assume for the sake of this example there are 10 project resources, and you have estimated 2,536 man-days of work effort which will need to occur for the elaboration phase of the project. Let's also assume a week for this project is defined as 5 business days, and that each iteration in the elaboration phase will last a calendar duration of 6 weeks.
    A simple calculation is performed to calculate the daily burn rate for a single iteration, which produces a result of...

      (Number of resources * days per week) * duration of iteration = Number of days required per iteration
      (10 resources * 5 days/week) * 6 weeks = 300 man-days of effort required per iteration

    Step 3 => Calculate the number of iterations that can occur within a phase
    Next calculate the number of iterations that can occur for the amount of man-days of effort estimated for the phase being considered...

      (number of man-days of effort estimated / number of man-days required per iteration) = # of iterations for phase
      (2,536 man-days of estimated effort for phase / 300 man-days of effort required per iteration) = 8.45 iterations, which should be rounded to a whole number such as 9 iterations*

    *Note - It is important to note this is an approximate calculation, not an exact science. This particular example is a simple one, which assumes all resources are utilized throughout the phase, including tech resources, etc. (rounding down or up to a whole number based on project factor considerations). It is also best in many cases to round up to a higher number, as this provides some calendar scheduling contingency.
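    For convenience, the same arithmetic as a small Python sketch, using the numbers from the example above (the sketch is just a calculator for the steps, not part of the OUM method itself):

      #!/usr/bin/env python3
      import math

      def iterations_for_phase(phase_effort_days, resources, days_per_week, iteration_weeks):
          """Approximate number of iterations for a phase, per Steps 2 and 3 above."""
          days_per_iteration = resources * days_per_week * iteration_weeks  # Step 2
          return phase_effort_days / days_per_iteration                     # Step 3

      estimate = iterations_for_phase(2536, resources=10, days_per_week=5, iteration_weeks=6)
      print(round(estimate, 2))   # 8.45
      print(math.ceil(estimate))  # 9 -- rounding up adds some scheduling contingency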

    Read the article

  • alternative filesystems for SSD

    - by freedrull
    I am tired of watching fsck check my filesystem when my eeepc 901 shuts down abruptly due to a crash. I know that with a journaling filesystem, I won't have to wait for a check. However, I am well aware of the poor I/O performance of the SSD, so I can imagine using a journaling filesystem being even more frustrating, since there will be constant writes to the journal? I will buy a new laptop without such a crummy ssd someday but, is there anything I can do now, on the software side of things?

    Read the article

  • Patching and PCI Compliance

    - by Joel Weise
    One of my friends and master of the security universe, Darren Moffat, pointed me to Dan Anderson's blog the other day. Dan went to Toorcon, which is a security conference, where he attended a talk on security patching titled "Stop Patching, for Stronger PCI Compliance". I realize that often times speakers will use a headline-grabbing title to create interest in their talk, and this one certainly got my attention. I did not go to the conference and did not see the presentation, so I can only go by what is in the Toorcon agenda summary and on Dan's blog, but the general statement to stop patching for stronger PCI compliance seems a bit misleading to me. Clearly patching is important to all systems management and should be a part of any organization's security hygiene. Further, PCI does require the patching of systems to maintain compliance. So it's important to mention that organizations should not simply stop patching their systems; and I want to believe that was not the speaker's intent.

    So let's look at PCI requirement 6: "Unscrupulous individuals use security vulnerabilities to gain privileged access to systems. Many of these vulnerabilities are fixed by vendor-provided security patches, which must be installed by the entities that manage the systems. All critical systems must have the most recently released, appropriate software patches to protect against exploitation and compromise of cardholder data by malicious individuals and malicious software."

    Notice the word "appropriate" in the requirement. This is stated to give organizations some latitude to apply patches that make sense in their environment and that target the vulnerabilities in question. Haven't we all seen a vulnerability scanner throw a false positive and flag some module and point to a recommended patch, only to realize that the module doesn't exist on our system? Applying such a patch would obviously not be appropriate. This does not mean an organization can ignore the fact they need to apply security patches. It's pretty clear they must. Of course, organizations have other options in terms of compliance when it comes to patching. For example, they could remove a system from scope and make sure that system does not process or contain cardholder data. [This may or may not be a significant undertaking. I just wanted to point out that there are always options available.]

    PCI DSS requirement 6.1 also includes the following note: "Note: An organization may consider applying a risk-based approach to prioritize their patch installations. For example, by prioritizing critical infrastructure (for example, public-facing devices and systems, databases) higher than less-critical internal devices, to ensure high-priority systems and devices are addressed within one month, and addressing less critical devices and systems within three months."

    Notice there is no mention of stopping patching one's systems. And the note also states an organization may apply a risk-based approach. [A smart approach, but also not mandated.] Such a risk-based approach is not intended to remove the requirement to patch one's systems. It is meant, as stated, to allow one to prioritize their patch installations.

    So what does this mean to an organization that must comply with PCI DSS and maintain some sanity around their patch management and overall operational readiness? I for one like to think that most organizations take a common sense and balanced approach to their business and security posture. If patching is becoming an unbearable task, review why that is the case and possibly look for means to improve operational efficiencies; but also recognize that security is important to maintaining the availability and integrity of one's systems. Likewise, whether we like it or not, the cyber-world we live in is getting more complex and threatening - and I don't think it's going to get better any time soon.

    Read the article

  • /proc/net/dev and /sys/class/net/ bogus network interface names

    - by sfink
    I am constructing a list of network interfaces to monitor based on the contents of /proc/net/dev. But I am getting some bogus interfaces in the list:

      __tmp1104705027
      __tmp974528607

    Where do those come from? They also show up in /sys/class/net/:

      # ls -1 /sys/class/net/
      eth0
      eth1
      eth2
      eth3
      lo
      sit0
      __tmp1104705027
      __tmp974528607

    For now, I think I'll just ignore anything starting with __tmp, but I'd like to know what they are and where they come from. This is on a recompiled CentOS 5.3 kernel: 2.6.18-128.7.1.el5.tvh.7PAE #1 SMP PREEMPT
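    A minimal sketch of that parse-and-ignore approach, assuming Python is available (it only implements the workaround described above; it doesn't explain where the names come from):

      #!/usr/bin/env python3
      def monitored_interfaces(path="/proc/net/dev"):
          """Return interface names from /proc/net/dev, skipping __tmp* entries."""
          names = []
          with open(path) as f:
              for line in f.readlines()[2:]:   # the first two lines are column headers
                  name = line.split(":", 1)[0].strip()
                  if name and not name.startswith("__tmp"):
                      names.append(name)
          return names

      print(monitored_interfaces())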

    Read the article

  • How to redirect logs from Cisco firewall to a specific file ?

    - by nitins
    We need to redirect the logs from our Cisco firewall (SA520-K9) to a syslogd server (it's a CentOS server). The settings are done on the firewall, but the messages from the firewall are going to /var/log/messages and the console instead of /var/log/firewall.log, which is our requirement. This is our syslog config file:

      *.info;mail.none;authpriv.none;cron.none    /var/log/messages
      authpriv.*                                  /var/log/secure
      mail.*                                      -/var/log/maillog
      cron.*                                      /var/log/cron
      *.emerg                                     *
      uucp,news.crit                              /var/log/spooler
      local7.*                                    /var/log/firewall.log

    Any advice?

    Read the article

  • chroot on OSX as a different OS

    - by ekaqu
    I was wondering if anyone has been able to use chroot on OSX to run another OS (ubuntu, centos). I know that they are very different, but almost everything I want to use this for wouldn't care about anything at the level of the kernel, so was hoping there would be a way to do this without using a VM. Based off my google searches, I see this question is asked, but no real answer other than "try a VM". Would really like to do this without a VM though.

    Read the article

  • How to upgrade XBMC Live via command line?

    - by sunpech
    I've been unable to do a fresh install of XBMC Live 9.11 to my hard drive. Every time it fails at the Install System step. But I am able to get XBMC Live 9.04.1 to install successfully. How do I upgrade XBMC Live 9.04.1 to 9.11? I understand that Ctrl+Shift+F2 brings up the command line, but what are the next set of commands to run?

    Read the article

  • High server load - [jbd2/md1-8] using 99.99% IO

    - by Alex
    I've been having spikes in load over the last week. This usually occurs once or twice a day. I've managed to identify from iotop that [jbd2/md1-8] is using 99.99% IO. During the high load times there is no high traffic to the server. Server specs are:

      AMD Opteron, 8 core
      16 GB RAM
      2 x 2,000 GB 7,200 RPM HDD, software RAID 1
      CloudLinux + cPanel
      MySQL is properly tuned

    Apart from the spikes, the load usually is around 0.80 at most. I've searched around but can't find what [jbd2/md1-8] does exactly. Has anyone had this problem or does anyone know a possible solution? Thank you.

    UPDATE:

      TIME     TID  PRIO  USER  DISK READ  DISK WRITE  SWAPIN  IO       COMMAND
      16:05:36 399  be/3  root  0.00 B/s   38.76 K/s   0.00 %  99.99 %  [jbd2/md1-8]

    Read the article

  • What effect does RAID stripe size have on read-ahead settings?

    - by stbrody
    I'm trying to figure out the correct read-ahead values to set on a RAID10 array, and I'm wondering if the RAID stripe size should factor into my considerations. I've heard conflicting information about this in the past. I once heard that you should always set your read-ahead value to a multiple of the RAID stripe size, and never below the stripe size, because that is the minimum amount of data the RAID controller will ever try to read at once. Someone else told me, however, that setting read-ahead below the stripe size is fine, and can, in fact, increase the amount of parallel reads you can do across devices in the array, increasing performance and decreasing load on the array. So which is it? Do read-ahead settings that aren't multiples of the stripe size make sense or not?
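    For concreteness, a small sketch of the "multiple of the stripe size" arithmetic mentioned above; the 256 KiB stripe and 4x multiplier are made-up values, and the sector conversion assumes the 512-byte units used by blockdev --setra:

      #!/usr/bin/env python3
      # Hypothetical numbers: a 256 KiB stripe and a 4x multiplier.
      stripe_kib = 256
      multiple = 4

      readahead_kib = stripe_kib * multiple
      readahead_sectors = readahead_kib * 1024 // 512   # 512-byte sectors, as used by `blockdev --setra`

      print(f"read-ahead = {readahead_kib} KiB = {readahead_sectors} sectors")
      # read-ahead = 1024 KiB = 2048 sectors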

    Read the article

  • Executing local script/command on remote server

    - by Ian McGrath
    I have a command that I want to run on machine B from machine A. If I run the command on machine B locally, it works fine. Here is the command:

      for n in `find /data1/ -name 'ini*.ext'` ; do echo cp $n "`dirname $n `/` basename $n .ext`"; done

    From machine A, I issue this command:

      ssh user@machineB for n in `find /data1/ -name 'ini*jsem'` ; do echo cp $n "`dirname $n `/` basename $n .jsem`"; done

    But I get the error:

      syntax error near unexpected token do

    What is wrong? I think it has something to do with double quotes, single quotes, or semicolons, because executing the command ssh user@machineB ls works fine. So it's not an issue of authentication or something else. Thanks

    Read the article
