Search Results

Search found 64472 results on 2579 pages for 'data context'.


  • DRBD Primary/Primary + iSCSI: does accessing different files avoid split brain?

    - by Eddie C.
    I have a question / curiosity about split brain on a DRBD Primary/Primary configuration. Suppose two nodes (hosts), host1 and host2, configured with DRBD Primary/Primary and two different shares (NFS, CIFS or iSCSI) of a replicated area (say /drbd):

        /drbd/file1.data
        /drbd/file2.data

    If one pool of clients accessed only the host1 share, reading and writing only file1.data, and another pool accessed only the host2 share for file2.data, would this scenario avoid a split-brain situation in case of one node failing, or is that just a conjecture? The final purpose is load balancing between the two nodes in normal conditions, collapsing to one node only in case of failure. Thank you! Eddie
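
    However the clients are partitioned, dual-primary DRBD deployments normally also declare explicit split-brain recovery policies, since loss of the replication link (not the client access pattern) is what triggers split brain. A minimal sketch of the relevant net section, assuming DRBD 8.3-style syntax; the resource name and handler choices are illustrative, not taken from the question:

        resource r0 {
          net {
            allow-two-primaries;                 # required for Primary/Primary
            after-sb-0pri discard-zero-changes;  # auto-heal when one side wrote nothing new
            after-sb-1pri discard-secondary;
            after-sb-2pri disconnect;            # two diverged primaries: manual recovery
          }
        }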

    Read the article

  • How to make new file permissions inherit from the parent directory?

    - by Wai Yip Tung
    I have a directory called data. I am running a script under the user id 'robot'. robot writes to the data directory and updates files inside it. The idea is that data is open for both me and robot to update, so I set up the permissions and owner group like this:

        drwxrwxr-x 2 me robot-grp 4096 Jun 11 20:50 data

    where both me and robot belong to 'robot-grp'. I changed the permissions and the owner group recursively, like the parent directory. I regularly upload new files into the data directory using rsync. Unfortunately, newly uploaded files do not inherit the parent directory's permissions as I had hoped. Instead they look like this:

        -rw-r--r-- 1 me users 6 Jun 11 20:50 new-file.txt

    When robot tries to update new-file.txt, it fails due to lack of file permission. I'm not sure if setting umask helps; in any case the new files do not really follow it:

        $ umask -S
        u=rwx,g=rx,o=rx

    I'm often confounded by Unix file permissions. Do I even have the right plan? I'm using Debian lenny.
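
    A common pattern for shared directories like this (a sketch, not verified against this exact setup; the setfacl line assumes the acl package and an ACL-enabled mount) is a setgid directory plus a default ACL, so group access survives the creator's umask:

        # new files created under data/ inherit the robot-grp group
        chgrp -R robot-grp data
        chmod g+s data
        # default ACL: new files are group-writable regardless of umask
        setfacl -d -m g:robot-grp:rwX data

    Note that rsync -a replays the source permissions on top of this; an option like --chmod=ug+rw on the upload side can compensate.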

    Read the article

  • Monitor uWSGI via Nagios: invalid socket

    - by webjay
    I'm trying to monitor uWSGI via Nagios, but according to uWSGI I have specified an invalid socket. I got the socket path from the JSON config file, which also says chmod-socket: 666, so I have a hunch that the problem is permission-based. The socket file is owned by www-data, which I don't want to tinker with, so are there any other ways?

        $ uwsgi --socket=/tmp/app.sock --nagios
        detected binary path: /usr/local/bin/uwsgi
        UWSGI UNKNOWN: you have specified an invalid socket

        $ ls -l /tmp/app.sock
        srw-rw-rw- 1 www-data www-data 0 2012-10-26 17:00 /tmp/app.sock
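
    If it is permissions, one low-impact test (a sketch; assumes sudo is available and the checking user may switch to www-data) is to run the same probe as the socket's owner and compare results:

        # run the uWSGI Nagios probe as the user that owns the socket
        sudo -u www-data /usr/local/bin/uwsgi --socket=/tmp/app.sock --nagios

    If that returns OK while the Nagios user's attempt does not, wrapping the check in sudo (with a matching sudoers entry) avoids touching www-data itself.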

    Read the article

  • Checking the configuration of two systems to determine changes

    - by None
    We are standing up a replica data center at work and need to ensure that the new data center is configured (nearly) identically to the original. The new data center will be differently addressed and named than the original and will have differing user accounts, but all the COTS, patches, and configurations should be the same. We would normally ghost the original servers and install those images onto the new machines; however, we have a few problematic pieces of COTS that require we install them outside of an image, due to how they capture the setup of the network during their installation and maintain it within their configuration information (in some cases storing it in various databases). We have tried multiple times, and this piece of COTS cannot be captured within a ghost image unless the destination machine will have an identical network setup (all the same IPs, hostnames, user accounts, etc. across the entire network) as the original. In truth, it is the setup of these special COTS that I want to audit the most, because they are difficult to install and configure in the first place. In light of the fact that we can't simply ghost, I'm trying to find a reasonable manner to audit the new data center and check whether it is set up like the original (some sort of system-wide configuration audit or integrity check). I'm considering using something like Tripwire for Servers to capture the configuration on the source machines and then run an audit on the destination machines. I understand that it will still show some differences due to the minor config changes, but I'm hoping that it will eliminate the majority of the work. Here are some of the constraints I'm working under:

    - The data center comprises multiple Windows and Linux machines of differing versions (about 20 total)
    - I absolutely cannot ghost or snap any other type of image of these machines ... at least not in their final configuration
    - I want to audit the final configuration to ensure all of the COTS, patches, configurations, etc. are installed and set up properly (as compared to the original data center)
    - I would rather not install any additional tools on these machines ... I'd much rather run it from a standalone machine or off a DVD
    - Price of tools is important but not an impossible burden; however, getting a solution soon is important (I can't take the time to roll my own tools to do this)
    - For the COTS that stores the network information, I don't know all of the places it stores the network information ... so it would be unlikely I could find a way in the near future to adjust its setup after the installation has occurred

    Anyone have any thoughts or alternate approaches? Can anyone recommend tools that would be usable for system-wide configuration audits?
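
    For the Linux half at least, an agentless first pass can be done with nothing but SSH from a standalone audit box (a sketch under those assumptions; hostnames are placeholders, and Tripwire or OpenSCAP would be the heavier equivalents):

        # on each host: capture the package set and /etc checksums
        dpkg-query -W -f '${Package} ${Version}\n' | sort > /tmp/audit-packages.txt   # rpm -qa | sort on RHEL
        find /etc -type f -exec md5sum {} + | sort -k2 > /tmp/audit-etc.txt

        # from the audit machine: diff old vs new data center, host by host
        diff <(ssh host-old cat /tmp/audit-packages.txt) <(ssh host-new cat /tmp/audit-packages.txt)
        diff <(ssh host-old cat /tmp/audit-etc.txt)      <(ssh host-new cat /tmp/audit-etc.txt)

    The /etc diff will flag the expected IP/hostname deltas alongside real drift, so it narrows the search rather than replacing a proper audit tool.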

    Read the article

  • MySQL query, 2 similar servers, 2 minute difference in execution times

    - by mr12086
    I had a similar question on Stack Overflow, but it seems to be more server/MySQL-setup related than coding. The queries below all execute instantly on our development server, whereas they can take up to 2 minutes 20 seconds on production. The query execution time seems to be affected by how ambiguous the LIKE strings are: if they closely match a country that has few matches it will take less time, and if you use something like 'ge' for Germany it will take longer to execute. But this doesn't always work out like that; at times it's quite erratic. "Sending data" appears to be the culprit - but why, and what does that mean? Also, memory on production looks to be quite low (free memory)?

    Production:

        Intel Quad Xeon E3-1220 3.1GHz
        4GB DDR3
        2x 1TB SATA in RAID1
        Network speed 100Mb
        Ubuntu

    Development:

        Intel Core i3-2100, 2C/4T, 3.10GHz
        500 GB SATA - No RAID
        4GB DDR3

    UPDATE 2: mysqltuner output:

    [prod]

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.61-0ubuntu0.10.04.1
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 103M (Tables: 180)
        [--] Data in InnoDB tables: 491M (Tables: 19)
        [!!] Total fragmented tables: 38
        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 77d 4h 6m 1s (53M q [7.968 qps], 14M conn, TX: 87B, RX: 12B)
        [--] Reads / Writes: 98% / 2%
        [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads)
        [OK] Maximum possible memory usage: 463.8M (11% of installed RAM)
        [OK] Slow queries: 0% (12K/53M)
        [OK] Highest usage of available connections: 22% (34/151)
        [OK] Key buffer size / total MyISAM indexes: 16.0M/10.6M
        [OK] Key buffer hit rate: 98.7% (162M cached / 2M reads)
        [OK] Query cache efficiency: 20.7% (7M cached / 36M selects)
        [!!] Query cache prunes per day: 3934
        [OK] Sorts requiring temporary tables: 1% (3K temp sorts / 230K sorts)
        [!!] Joins performed without indexes: 71068
        [OK] Temporary tables created on disk: 24% (3M on disk / 13M total)
        [OK] Thread cache hit rate: 99% (690 created / 14M connections)
        [!!] Table cache hit rate: 0% (64 open / 85M opened)
        [OK] Open file limit used: 12% (128/1K)
        [OK] Table locks acquired immediately: 99% (16M immediate / 16M locks)
        [!!] InnoDB data size / buffer pool: 491.9M/8.0M
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Enable the slow query log to troubleshoot bad queries
            Adjust your join queries to always utilize indexes
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            query_cache_size (> 16M)
            join_buffer_size (> 128.0K, or always use indexes with joins)
            table_cache (> 64)
            innodb_buffer_pool_size (>= 491M)

    [dev]

        -------- General Statistics --------------------------------------------------
        [--] Skipped version check for MySQLTuner script
        [OK] Currently running supported MySQL version 5.1.62-0ubuntu0.11.10.1
        [!!] Switch to 64-bit OS - MySQL cannot currently use all of your RAM
        -------- Storage Engine Statistics -------------------------------------------
        [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [--] Data in MyISAM tables: 185M (Tables: 632)
        [--] Data in InnoDB tables: 967M (Tables: 38)
        [!!] Total fragmented tables: 73
        -------- Security Recommendations -------------------------------------------
        [OK] All database users have passwords assigned
        -------- Performance Metrics -------------------------------------------------
        [--] Up for: 1d 2h 26m 9s (5K q [0.058 qps], 1K conn, TX: 4M, RX: 1M)
        [--] Reads / Writes: 99% / 1%
        [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads)
        [OK] Maximum possible memory usage: 463.8M (11% of installed RAM)
        [OK] Slow queries: 0% (0/5K)
        [OK] Highest usage of available connections: 1% (2/151)
        [OK] Key buffer size / total MyISAM indexes: 16.0M/18.6M
        [OK] Key buffer hit rate: 99.9% (60K cached / 36 reads)
        [OK] Query cache efficiency: 44.5% (1K cached / 2K selects)
        [OK] Query cache prunes per day: 0
        [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 44 sorts)
        [OK] Temporary tables created on disk: 24% (162 on disk / 666 total)
        [OK] Thread cache hit rate: 99% (2 created / 1K connections)
        [!!] Table cache hit rate: 1% (64 open / 4K opened)
        [OK] Open file limit used: 8% (88/1K)
        [OK] Table locks acquired immediately: 100% (1K immediate / 1K locks)
        [!!] InnoDB data size / buffer pool: 967.7M/8.0M
        -------- Recommendations -----------------------------------------------------
        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            Enable the slow query log to troubleshoot bad queries
            Increase table_cache gradually to avoid file descriptor limits
        Variables to adjust:
            table_cache (> 64)
            innodb_buffer_pool_size (>= 967M)

    UPDATE 1: When testing the queries listed here, there is usually no more than one other query taking place, and usually none. Because production is actually handling Apache requests that development gets very few of (it's only myself and one other who access it): could the 4GB of RAM be getting exhausted by using the single machine for both the Apache and MySQL server?

    Production:

        sudo hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads: 24872 MB in 2.00 seconds = 12450.72 MB/sec
         Timing buffered disk reads: 368 MB in 3.00 seconds = 122.49 MB/sec
        sudo hdparm -tT /dev/sdb
        /dev/sdb:
         Timing cached reads: 24786 MB in 2.00 seconds = 12407.22 MB/sec
         Timing buffered disk reads: 350 MB in 3.00 seconds = 116.53 MB/sec

    Server version (mysql + ubuntu versions): 5.1.61-0ubuntu0.10.04.1

    Development:

        sudo hdparm -tT /dev/sda
        /dev/sda:
         Timing cached reads: 10632 MB in 2.00 seconds = 5319.40 MB/sec
         Timing buffered disk reads: 400 MB in 3.01 seconds = 132.85 MB/sec

    Server version (mysql + ubuntu versions): 5.1.62-0ubuntu0.11.10.1

    ORIGINAL DATA: This query is NOT the query in question, but is related, so I'll post it:
        SELECT f.form_question_has_answer_id
        FROM form_question_has_answer f
        INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id
        INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id
        INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id
        INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id
        INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id
        WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29')
        AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%')
        AND f.form_question_has_answer_form_id = '174'

    And the explain plan for the above query, run on both dev and production, produces the same plan:

        id | select_type | table | type   | possible_keys                                                                                                                                 | key                              | key_len | ref                                                | rows | Extra
        1  | SIMPLE      | p2    | const  | PRIMARY                                                                                                                                       | PRIMARY                          | 4       | const                                              | 1    | Using index
        1  | SIMPLE      | f     | ref    | form_question_has_answer_form_id,form_question_has_answer_user_id                                                                            | form_question_has_answer_form_id | 4       | const                                              | 796  | Using where
        1  | SIMPLE      | u     | eq_ref | PRIMARY                                                                                                                                       | PRIMARY                          | 4       | new_klarents.f.form_question_has_answer_user_id    | 1    | Using index
        1  | SIMPLE      | p     | ref    | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id | project_company_has_user_user_id | 4       | new_klarents.f.form_question_has_answer_user_id    | 1    | Using where
        1  | SIMPLE      | f2    | ref    | form_project_id                                                                                                                               | form_project_id                  | 4       | const                                              | 15   | Using where
        1  | SIMPLE      | c     | eq_ref | PRIMARY                                                                                                                                       | PRIMARY                          | 4       | new_klarents.p.project_company_has_user_company_id | 1    | Using where

    This query takes 2 minutes ~20 seconds to execute. The query that is ACTUALLY being run on the server is this one:

        SELECT COUNT(*) AS num_results
        FROM (SELECT f.form_question_has_answer_id
              FROM form_question_has_answer f
              INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id
              INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id
              INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id
              INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id
              INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id
              WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29')
              AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%')
              AND f.form_question_has_answer_form_id = '174'
              GROUP BY f.form_question_has_answer_id) dctrn_count_query;

    With explain plans (again the same on dev and production):

        id | select_type | table | type   | possible_keys                                                                                                                                                                             | key                              | key_len | ref                                                | rows | Extra
        1  | PRIMARY     | NULL  | NULL   | NULL                                                                                                                                                                                      | NULL                             | NULL    | NULL                                               | NULL | Select tables optimized away
        2  | DERIVED     | p2    | const  | PRIMARY                                                                                                                                                                                   | PRIMARY                          | 4       |                                                    | 1    | Using index
        2  | DERIVED     | f     | ref    | form_question_has_answer_form_id,form_question_has_answer_user_id                                                                                                                        | form_question_has_answer_form_id | 4       |                                                    | 797  | Using where
        2  | DERIVED     | p     | ref    | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id,project_company_has_user_garbage_collection | project_company_has_user_user_id | 4       | new_klarents.f.form_question_has_answer_user_id    | 1    | Using where
        2  | DERIVED     | f2    | ref    | form_project_id                                                                                                                                                                           | form_project_id                  | 4       |                                                    | 15   | Using where
        2  | DERIVED     | c     | eq_ref | PRIMARY                                                                                                                                                                                   | PRIMARY                          | 4       | new_klarents.p.project_company_has_user_company_id | 1    | Using where
        2  | DERIVED     | u     | eq_ref | PRIMARY                                                                                                                                                                                   | PRIMARY                          | 4       | new_klarents.p.project_company_has_user_user_id    | 1    | Using where; Using index

    On the production server the information I have is as follows. Upon execution:

        +-------------+
        | num_results |
        +-------------+
        | 3 |
        +-------------+
        1 row in set (2 min 14.28 sec)

    Show profile:

        Status                         | Duration
        starting                       | 0.000016
        checking query cache for query | 0.000057
        Opening tables                 | 0.004388
        System lock                    | 0.000003
        Table lock                     | 0.000036
        init                           | 0.000030
        optimizing                     | 0.000016
        statistics                     | 0.000111
        preparing                      | 0.000022
        executing                      | 0.000004
        Sorting result                 | 0.000002
        Sending data                   | 136.213836
        end                            | 0.000007
        query end                      | 0.000002
        freeing items                  | 0.004273
        storing result in query cache  | 0.000010
        logging slow query             | 0.000001
        logging slow query             | 0.000002
        cleaning up                    | 0.000002

    On development the results are as follows:

        +-------------+
        | num_results |
        +-------------+
        | 3 |
        +-------------+
        1 row in set (0.08 sec)

    Again the profile for this query:

        Status                         | Duration
        starting                       | 0.000022
        checking query cache for query | 0.000148
        Opening tables                 | 0.000025
        System lock                    | 0.000008
        Table lock                     | 0.000101
        optimizing                     | 0.000035
        statistics                     | 0.001019
        preparing                      | 0.000047
        executing                      | 0.000008
        Sorting result                 | 0.000005
        Sending data                   | 0.086565
        init                           | 0.000015
        optimizing                     | 0.000006
        executing                      | 0.000020
        end                            | 0.000004
        query end                      | 0.000004
        freeing items                  | 0.000028
        storing result in query cache  | 0.000005
        removing tmp table             | 0.000008
        closing tables                 | 0.000008
        logging slow query             | 0.000002
        cleaning up                    | 0.000005

    If I remove the user and/or project inner joins, the query time is reduced to 30s.

    Last bit of information I have: the MySQL server and Apache are on the same box; there is only one box for production.

    Production output from top, before & after:

        top - 15:43:25 up 78 days, 12:11, 4 users, load average: 1.42, 0.99, 0.78
        Tasks: 162 total, 2 running, 160 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.1%us, 50.4%sy, 0.0%ni, 49.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 4037868k total, 3772580k used, 265288k free, 243704k buffers
        Swap: 3905528k total, 265384k used, 3640144k free, 1207944k cached

        top - 15:44:31 up 78 days, 12:13, 4 users, load average: 1.94, 1.23, 0.87
        Tasks: 160 total, 2 running, 157 sleeping, 0 stopped, 1 zombie
        Cpu(s): 0.2%us, 50.6%sy, 0.0%ni, 49.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 4037868k total, 3834300k used, 203568k free, 243736k buffers
        Swap: 3905528k total, 265384k used, 3640144k free, 1207804k cached

    But this isn't a good representation of production's normal status, so here is a grab of it from today, outside of executing the queries:

        top - 11:04:58 up 79 days, 7:33, 4 users, load average: 0.39, 0.58, 0.76
        Tasks: 156 total, 1 running, 155 sleeping, 0 stopped, 0 zombie
        Cpu(s): 3.3%us, 2.8%sy, 0.0%ni, 93.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 4037868k total, 3676136k used, 361732k free, 271480k buffers
        Swap: 3905528k total, 268736k used, 3636792k free, 1063432k cached

    Development (this one doesn't change during or after):

        top - 15:47:07 up 110 days, 22:11, 7 users, load average: 0.17, 0.07, 0.06
        Tasks: 210 total, 2 running, 208 sleeping, 0 stopped, 0 zombie
        Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem: 4111972k total, 1821100k used, 2290872k free, 238860k buffers
        Swap: 4183036k total, 66472k used, 4116564k free, 921072k cached
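
    Both tuner reports flag the same pair of problems: an InnoDB data set of 491M (prod) / 967M (dev) running against the default 8M buffer pool, and a 64-entry table cache churned millions of times. A hedged my.cnf sketch covering just the variables mysqltuner names; the values are illustrative starting points for a shared 4GB Apache+MySQL box, not measurements from this system:

        [mysqld]
        # fit the InnoDB working set in memory; tuner asks for >= 491M / 967M
        innodb_buffer_pool_size = 1G
        # stop reopening tables constantly; tuner asks for > 64
        table_cache             = 512
        # tuner asks for > 16M (prod shows 3934 prunes/day)
        query_cache_size        = 32M
        # or better, add indexes so the 71068 index-less joins go away
        join_buffer_size        = 256K

    A long "Sending data" phase on an identical plan is consistent with reads waiting on disk against a cold or undersized buffer pool, which would also fit dev (same plan, smaller concurrent load) returning instantly.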

    Read the article

  • VSFTPD says "500 OOPS: cannot change directory"

    - by Aman Kumar Jain
    As soon as I log in with my virtual users over FTP I get "500 OOPS: cannot change directory". I have the following configuration in vsftpd.conf. Please suggest.

        listen=YES
        anonymous_enable=NO
        local_enable=YES
        write_enable=YES
        local_umask=002
        dirmessage_enable=YES
        xferlog_enable=YES
        connect_from_port_20=YES
        chroot_local_user=YES
        secure_chroot_dir=/var/run/vsftpd
        pam_service_name=vsftpd
        virtual_use_local_privs=YES
        guest_enable=YES
        user_sub_token=$USER
        hide_ids=YES
        user_config_dir=/data/some-path/ftp/users
        local_root=/data/some-path/ftp/data/$USER
        guest_username=vsftpd
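
    vsftpd raises exactly this error when it cannot chdir into local_root after login, so the first thing to verify (a sketch; "alice" is a placeholder virtual user, and the ownership assumes guest_username=vsftpd as configured above) is that the per-user directory exists and is accessible to the guest user:

        ls -ld /data/some-path/ftp/data/alice
        # if missing or wrongly owned:
        sudo mkdir -p /data/some-path/ftp/data/alice
        sudo chown vsftpd:vsftpd /data/some-path/ftp/data/alice
        sudo chmod 755 /data/some-path/ftp/data/alice

    With chroot_local_user=YES, newer vsftpd builds also refuse a writable chroot root; if the directory exists and is owned correctly, that is the next thing to rule out.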

    Read the article

  • Bugzilla Install question - I'm stuck

    - by Nabeel
    I run Bugzilla's checksetup.pl (migrating an older version), and it always returns:

        Reading ./localconfig...
        Checking for DBD-mysql (v4.00) ok: found v4.005
        Had to create DBD::mysql::dr::imp_data_size unexpectedly at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1229, <DATA> line 225.
        Use of uninitialized value in subroutine entry at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1229, <DATA> line 225.
        Had to create DBD::mysql::db::imp_data_size unexpectedly at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1259, <DATA> line 225.
        Use of uninitialized value in subroutine entry at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBI.pm line 1259, <DATA> line 225.
        There was an error connecting to MySQL: Undefined subroutine &DBD::mysql::db::_login called at /usr/lib64/perl5/site_perl/5.8.8/x86_64-linux-thread-multi/DBD/mysql.pm line 142, <DATA> line 225.

    MySQL version:

        [root@bugzilla-core TMP]# mysql --version
        mysql Ver 14.12 Distrib 5.0.60sp1, for redhat-linux-gnu (x86_64) using readline 5.1

    And mysql_config:

        [root@bugzilla-core TMP]# mysql_config
        Usage: /data01/mysql-5.0.60/bin/mysql_config [OPTIONS]
        Options:
          --cflags [-I/data01/mysql-5.0.60/include -g]
          --include [-I/data01/mysql-5.0.60/include]
          --libs [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqlclient -lz -lcrypt -lnsl -lm -lmygcc]
          --libs_r [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqlclient_r -lz -lpthread -lcrypt -lnsl -lm -lpthread -lmygcc]
          --socket [/tmp/mysql.sock]
          --port [0]
          --version [5.0.60sp1]
          --libmysqld-libs [-rdynamic -L/data01/mysql-5.0.60/lib -lmysqld -lz -lpthread -lcrypt -lnsl -lm -lpthread -lrt -lmygcc]

    Now, I've tried the latest version of DBD-mysql (4.0.14). I'm completely lost and stumped. I'm not sure where to go from here. Scouring the web hasn't returned anything fruitful. Any ideas?
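
    "Undefined subroutine &DBD::mysql::db::_login" is the classic signature of a DBD::mysql whose compiled XS part was built against a different libmysqlclient than the one it loads at runtime. A hedged rebuild sketch that pins the module to the self-compiled MySQL shown by mysql_config above (the source-tree steps are assumptions; the path is from the question):

        # from an unpacked DBD-mysql source tree
        perl Makefile.PL --mysql_config=/data01/mysql-5.0.60/bin/mysql_config
        make
        make test
        make install

    Checking afterwards that only one DBD/mysql.pm (and its matching mysql.so) exists under @INC rules out an old copy shadowing the new build.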

    Read the article

  • Easy Transfer from a dead computer

    - by Nathan DeWitt
    I had a computer that electrocuted me and the company sent me a new one. The hard drive from the old computer works fine and is in my new computer. I would like to transfer my files from the old drive to the new one, preferably using Easy Transfer (old & new computers were Win7). When I go through the Easy Transfer wizard, it assumes my old computer is running and that I can run a process to backup all my data to a single file. However, in my case I have the system drive in my new computer and want to pull the data off it. I would like to avoid rebooting the old computer, to avoid damage to myself or my data. I would like to avoid booting into the old system drive, as my new hardware is significantly different and I imagine I'll run into some missing hardware issues. What's the easiest way to get my data off this drive?
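
    If Easy Transfer keeps insisting on a live source machine, a plain elevated copy of the old profile is a workable fallback that moves files rather than settings (a sketch; the old system drive is assumed to appear as D: and the profile names are placeholders):

        :: from an elevated command prompt on the new Win7 install
        robocopy D:\Users\OldMe C:\Users\Me /E /COPY:DAT /XJ /R:1 /W:1

    /XJ matters: it skips the NTFS junction points inside profiles, which otherwise send a recursive copy into a loop. Given the hardware differences described, losing the settings transfer may be an acceptable trade.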

    Read the article

  • Using a named pipe to simulate a serial port on a VMware virtual machine (linux host and client)

    - by Dave M
    Trying to write a python program to create a simulated data stream and feed it, through a named pipe, to a VMware virtual machine. The host is running Ubuntu 11.10 and VMware Player 5.0.0. The VM is running Ubuntu Netbook 10.04. I am able to get the pipe working on the local machine but I am not able to get the pipe to pass data through the virtual serial port to the programs running on the virtual machine.

        #!/usr/bin/python
        import os
        #
        # Create a named pipe that will be used as the serial port on a VMware virtual machine
        SerialPipe = '/tmp/gpsd2NMEA'
        try:
            os.unlink(SerialPipe)
        except:
            pass
        os.mkfifo(SerialPipe)
        #
        # Open the named pipe
        NMEApipe = os.open(SerialPipe, os.O_RDWR|os.O_NONBLOCK)
        #
        # Write a string to the named pipe
        NMEAtime = "235959"
        os.write(NMEApipe, str( '%s\n' % NMEAtime ))

    Test to see if the python program is working on the host machine (displays 235959 if data is passing through the pipe):

        $ cat /tmp/gpsd2NMEA
        235959

    Serial port as defined in the VMware .vmx file:

        serial0.present = "TRUE"
        serial0.startConnected = "TRUE"
        serial0.fileType = "pipe"
        serial0.fileName = "/tmp/gpsd2NMEA"
        serial0.pipe.endPoint = "client"
        serial0.autodetect = "FALSE"
        serial0.tryNoRxLoss = "TRUE"
        serial0.yieldOnMsrRead = "TRUE"

    Test to see if the serial port in the VM is receiving data:

        $ cat /dev/ttyS0
        $ minicom -D /dev/ttyS0
        $ stty -F /dev/ttyS0 cs8 -parenb -cstopb 115200
        $ echo < /dev/ttyS0

    None of these display any data from the python program.
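
    One assumption worth testing (about VMware-on-Linux behaviour, not something stated in the question): with fileType = "pipe" the host-side endpoint is often a Unix domain socket rather than a FIFO, and with endPoint = "client" the VM connects outward, so the host program must be listening. socat makes a quick experiment:

        # remove the FIFO first, then listen where the VM will connect
        rm -f /tmp/gpsd2NMEA
        socat -d -d UNIX-LISTEN:/tmp/gpsd2NMEA,fork PTY,raw,echo=0

    If the VM's /dev/ttyS0 comes alive through that (start the VM after the listener), the Python writer can be pointed at the PTY socat reports instead of the FIFO.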

    Read the article

  • SELinux Contexts

    - by Josh
    I am configuring Apache/PHP and noticed Apache complaining about permissions with the PHP shared object:

        Starting httpd: httpd: Syntax error on line 206 of /etc/httpd/conf/httpd.conf: Cannot load /usr/lib/httpd/modules/libphp5.so into server: /usr/lib/httpd/modules/libphp5.so: cannot restore segment prot after reloc: Permission denied

    I looked at the context (it started fine with enforcement off) and found:

        [root@HDSSERVER conf]# ls --lcontext /usr/lib/httpd/modules/libphp5.so
        -rwxr-xr-x 1 root:object_r:httpd_modules_t root root 15565418 May 10 08:39 /usr/lib/httpd/modules/libphp5.so

    Shouldn't httpd (Apache) be able to access files with a context of httpd_modules_t? I got it fixed by applying:

        chcon -t textrel_shlib_t '/usr/lib/httpd/modules/libphp5.so'

    But I would have thought the httpd_modules_t context would work before this one. Can someone explain this to me?
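
    The distinction is the operation, not the domain: httpd_modules_t lets Apache map a module, but this libphp5.so requires text relocation ("cannot restore segment prot after reloc"), and types like textrel_shlib_t are what carry the execmod permission for that. Since chcon is lost on a filesystem relabel, a persistent version of the same fix (assuming the semanage tool from policycoreutils is installed) looks like:

        # record the label in local policy, then apply it from there
        semanage fcontext -a -t textrel_shlib_t '/usr/lib/httpd/modules/libphp5.so'
        restorecon -v /usr/lib/httpd/modules/libphp5.so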

    Read the article

  • Deployment Workbench no longer available after PXE boot

    - by Patrick
    Our build process revolves around Windows Deployment Workbench. Unfortunately this was set up by someone who is no longer with the company, and no one has ever dared/needed to make any changes. The other day it stopped working. It turns out that one of our build guys started thinking about changing some stuff in it, clicked something, and now it no longer works (he says now that he right-clicked on the 'LAB' entry in 'Deployment points' and hit 'Update', which took some time to run through, apparently). The job has fallen on me to resolve, and frankly I'm not sure what I'm doing. I was wondering if someone with more experience than me can provide some pointers as to troubleshooting, because I'm feeling quite a lot in the dark here. On the server I have Deployment Workbench up and running (MMC snap-in), version 3.0. There is a WDS service that appears to be running OK, as does the tftpd service. Nothing specific to this in the event logs. From the client side: PXE boot works and gets you to the Win PE launch, and it has the correct company logo as the background (proving to me that it's loading Win PE from the network). WPEINIT runs and asks for domain credentials; here the team simply put user/pass/domain in the boxes and click OK. Normally the build would kick off. Instead they get an error message saying that the \\NATBLU01\Distribution$ share isn't available. Checking \\NATBLU01\Distribution$ shows that it's there and accessible over the network. Security/permissions seem OK; even ANONYMOUS LOGON has read access to that share, so I don't see that being a problem. Digging the trace files from C:\MININT\SMSOSD\OSDLOGS\ after an attempt to run the build, I can see an error saying much the same:

        <![LOG[Validating connection to \\NATBLU01\Distribution$]LOG]!><time="16:42:14.000+000" date="03-15-2012" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
        <![LOG[FindFile: The file OSDConnectToUNC.exe could not be found in any standard locations.]LOG]!><time="16:42:14.000+000" date="03-15-2012" component="LiteTouch" context="" type="1" thread="" file="LiteTouch">
        <![LOG[The network location cannot be reached. For information about network troubleshooting, see Windows Help.]LOG]!><time="16:42:24.000+000" date="03-15-2012" component="LiteTouch" context="" type="3" thread="" file="LiteTouch">
        <![LOG[ERROR - Unable to map a network drive to \\NATBLU01\Distribution$.]LOG]!><time="16:42:24.000+000" date="03-15-2012" component="LiteTouch" context="" type="3" thread="" file="LiteTouch">

    BDD.LOG shows much the same. Full copies of both .LOG files can be found here: BDD.LOG LITETOUCH.LOG. I can get to a command prompt from the Win PE that boots from PXE, however there isn't any network stuff there. IPCONFIG returns nothing, so none of the tests I would usually run resolve anything. I'm at a loss, frankly. I did wonder if I could perhaps start a new build process, but if the change to the Deployment Workbench has knocked it offline I don't think I'm going to be able to create a new deployment. Failing that, we do have a deployment point labeled type 'Media' which appears to be a DVD ISO image of one of the builds, but it's dated 2008 - is it possible to export the network build to .ISO and build from DVD? We are looking at new hardware to run this from anyway (for the impending Windows 7 rollout) so a temporary workaround isn't going to be too much of a problem. All assistance is appreciated!

    EDIT: OK. Got it working again. Solution was close to Newmanth's idea. The problem was that our PE image didn't appear to be connecting to the network. I had an older copy of the PE boot.WIM on a stick that I had been using for other purposes. I booted that and correctly got a network connection; it showed a correct internal IP and could ping out, etc. However I was still getting the same errors in all the logs and while WPEINIT was running. What I did separately was to update the PE image that Deployment Workbench was pushing out to display a different background - I wanted to prove that I was working in the correct place. Turns out that I wasn't. I went and looked at the other deployment stuff we had on this machine: Windows Deployment Services was installed, and although all the install images are offline, the boot image was online, so I uploaded the copy from my stick to that. Booted straight off. And fixed. Working. Yay! For anyone stumbling across this in the future, you may find that although your deployment images are located in the Deployment Workbench, the Win PE boot image you are launching from is located in the associated Windows Deployment Services images.
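
    For anyone scripting the same fix, the WDS boot image swap can also be done from the command line. A hedged sketch (image name, architecture and path are placeholders; confirm the exact parameters with wdsutil /? on your WDS version):

        :: replace the stale WDS boot image with a known-good LiteTouch boot.wim
        wdsutil /Verbose /Replace-Image /Image:"Lite Touch Windows PE" /ImageType:Boot /Architecture:x86 /ReplacementImage /ImageFile:"D:\Boot\LiteTouchPE_x86.wim"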

    Read the article

  • Owner of uploads directory is `www-data` but this prevents FTP access via PHP scripts

    - by letseatfood
    To allow write access to Apache, I needed to run chown www-data:www-data /var/www/mysite/uploads on my site's uploads folder. This allows me to delete files from the folder via unlink() in a PHP script. Unfortunately, this prevents another PHP script, which uses FTP functions, from working. I think it is because the FTP user is mike, and now that the uploads directory is owned by www-data, mike cannot access it. I added mike to the group www-data, but this does not fix the issue. Can somebody advise me on how to allow the PHP FTP functions to work in addition to file deletion using PHP's unlink() function?
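
    Group membership alone does not help if the directory mode lacks group write. One way to satisfy both services (a sketch; assumes a default 755 mode is what is currently blocking mike):

        sudo chown -R www-data:www-data /var/www/mysite/uploads
        sudo usermod -aG www-data mike            # takes effect on mike's next login
        sudo chmod -R g+w /var/www/mysite/uploads
        sudo chmod g+s /var/www/mysite/uploads    # new files keep the www-data group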

    Read the article

  • Tools to monitor guest OS performance in vSphere

    - by Quick Joe Smith
    I am looking for some tool or way to retrieve performance data from guest VMs running under vSphere 4.1. I am currently interested in the 4 basic metrics: CPU (%), memory (%), disk availability (%) and network utilisation (Kb/s). The issue I have is that all of vSphere's performance data is from an ESXi host perspective (active, shared, consumed, overhead, swapped, etc.), which is far removed from the data from the VM's own perspective. For instance, I have a Windows server VM idling, using around 410MB (~25% of its allocated 2GB) as reported by Task Manager, and this is the value I'm after. vSphere's metrics seem unable to arrive at this figure by any reliable and repeatable means. Is anyone aware of tools that can obtain this kind of data? The simpler, the better.
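
    For scripted collection of the host-side counters at least, PowerCLI is the usual route; truly guest-eye-view numbers (like Task Manager's working set) still require an in-guest agent or WMI/SNMP polling of the guest itself. A hedged sketch (server and VM names are placeholders):

        # PowerCLI: last hour of CPU/memory usage counters for one VM
        Connect-VIServer vcenter.example.com
        Get-Stat -Entity (Get-VM "winserver01") -Stat cpu.usage.average, mem.usage.average -Start (Get-Date).AddHours(-1)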

    Read the article

  • Cluster File System

    - by Ben
    We are looking to choose a clustered file system for our in-house application. Let me first highlight my requirements: we have a storage array and 2 servers at present. We get data files from remote servers onto our servers, and on both servers we run our application to access that data and produce a final result, as per our requirements. In perhaps 3-4 months we may add more servers to the current cluster pool to handle more data load from the remote data senders. So my requirement is to mount the same storage partition on 2-3 servers (it might be 4-5 more servers in future); my application reads data from the storage partition and writes back to it. Is there any bottleneck or limitation with RHCS, GFS2, or anything else? We are new to RHCS + GFS and all of this. Can you suggest any better approach, or some lighter-weight way to deal with our requirement? What is the best OS version for this - how is RHEL 6.4 64-bit? Please share a case study or guide reference from past experience with such environments. Regards, Ben

    Read the article

  • Filtering columns in SQL Server replication - how?

    - by truthseeker
    Hi, I need to replicate some data from two tables in one database to another databases. I used snapshot replication. The issue is that I would like to replicate only some selected columns and the others should stay with untouched data. I don't want to loose their data. The sours of those columns is other system. So I need to replicate only data from my columns. Do anybody know how to achieve this?

    Read the article

  • Apache crashes a few seconds after starting

    - by Nacho
    Hi, I've got a problem with Apache. When I try to start it (/etc/init.d/apache2 start) it dies after a few seconds. It shows up in "ps aux" consuming a lot of memory and then dies. I don't know what could be causing Apache to consume this amount of memory:

        USER     PID   %CPU %MEM VSZ    RSS  TTY STAT START TIME COMMAND
        root     13379 1.0  0.3  14376  3908 ?   Ss   22:31 0:00 /usr/sbin/apache2 -k start
        www-data 13383 0.0  0.4  197316 4196 ?   Sl   22:31 0:00 /usr/sbin/apache2 -k start
        www-data 13390 0.0  0.3  172728 4172 ?   Sl   22:31 0:00 /usr/sbin/apache2 -k start
        www-data 13396 0.0  0.3  156336 4160 ?   Sl   22:31 0:00 /usr/sbin/apache2 -k start
        www-data 13400 0.0  0.3  148140 4156 ?   Sl   22:31 0:00 /usr/sbin/apache2 -k start
        www-data 13403 0.0  0.3  131748 4148 ?   Sl   22:31 0:00 /usr/sbin/apache2 -k start

    Here is a htop screenshot: http://i.imgur.com/N4Chh.png It happened suddenly; no change had been made to the server config, so I don't know what's causing it. The error log of my virtual servers shows this:

        [Sun Jan 30 22:19:50 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=9685): Couldn't create worker thread 11 in daemon process 'fb.ebookmetafinder.com'.
        [Sun Jan 30 22:19:55 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=9685): Couldn't create worker thread 19 in daemon process 'fb.ebookmetafinder.com'.
        [Sun Jan 30 22:29:40 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=12009): Couldn't create worker thread 18 in daemon process 'fb.ebookmetafinder.com'.
        [Sun Jan 30 22:31:06 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=13396): Couldn't create worker thread 15 in daemon process 'fb.ebookmetafinder.com'.
        [Sun Jan 30 22:35:02 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=14009): Couldn't create worker thread 16 in daemon process 'fb.ebookmetafinder.com'.
        [Sun Jan 30 22:35:07 2011] [alert] (11)Resource temporarily unavailable: mod_wsgi (pid=14009): Couldn't create worker thread 17 in daemon process 'fb.ebookmetafinder.com'.

    I'm on an Ubuntu server VPS and I use mod_wsgi with Django. Thanks.
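
    EAGAIN ("Resource temporarily unavailable") on thread creation usually means the daemon process hit a memory or thread limit, which is common on VPSes with tight address-space caps. A hedged mod_wsgi-side mitigation (the process group name is from the log; the numbers are illustrative):

        # fewer, smaller daemon threads keep per-process memory down
        WSGIDaemonProcess fb.ebookmetafinder.com processes=2 threads=5 stack-size=524288

    Checking ulimit -v / ulimit -u for the Apache user, and the VPS's memory limits, would confirm which resource is actually running out.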

    Read the article

  • How to backup a remote VPS machine?

    - by morpheous
    I am considering opting for a VPS solution, with the server running Ubuntu Server. I am pretty new to this, and I need to come up with a backup policy for my server data. The initial data is likely to be about 80Mb, and I expect it to grow at approximately 5Mb to 10Mb a day. Can anyone recommend:

    - A backup/restore policy (best practices for a small startup)
    - Which tools to use for backup?

    Another thing that is not clear to me is: where are the files normally backed up to, in the case of remote servers? If the files are backed up to the same machine (or even to another machine but with the same host), there is potentially a single point of failure. How do people normally back up their server data, and is the probability of machine meltdown or the host company's server farm "catching fire" so remote as not to be worth worrying about - especially for a small (read: one-man) startup like me?
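
    At these data volumes, a daily pull from a machine outside the provider removes the single point of failure cheaply. A hedged sketch (hostnames and paths are placeholders) that also keeps dated copies of anything that changed:

        # cron job on the backup machine: pull the VPS data over SSH
        rsync -az --delete --backup --backup-dir=../$(date +%F) \
            user@vps.example.com:/var/www/ /backups/vps/current/

    Tools like duplicity or rdiff-backup give the same effect with encryption and smarter retention, if the backup target is itself untrusted.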

    Read the article

  • Which network management system (NMS) to choose?

    - by QrystaL
    I need to integrate an NMS into a large enterprise system for data collection purposes. Primary requirements:

    - collection by SNMP
    - great scalability (up to 1,000 devices with 1,000 interfaces each)
    - failover
    - data storage in Oracle DBMS
    - integration API (configuration, data access)

    Any ideas would be appreciated...

    Read the article

  • Google Sitemap Generator installation and SELinux

    - by adnan
    When I try to install Google Sitemap Generator I receive this error:

        Change security context of to system_u:object_r:httpd_modules_t
        install: WARNING: ignoring --context (-Z); this kernel is not SELinux-enabled
        Program files successfully copied.
        ./install.sh: line 488: 14284 Segmentation fault "$DEST_DIR/$BIN_DIR/$DAEMON_BIN" update_setting $update_setting_flags "apache_conf=$APACHE_CONF" "apache_group=$APACHE_GROUP" > /dev/null

    After choosing the submit-file settings I tried to uninstall it, execute getenforce, and try again, but I hit the same problem. When I enter the directory /etc/sysconfig/ it does not contain the selinux file. My OS is CentOS 6 x86_64.

    Read the article

  • How do I get write access to Ubuntu files from Windows?

    - by Steven
    I'm running Ubuntu 11.10 in my virtual machine as a web server. I've mapped the W:\ drive in Win 7 to my /www folder in Ubuntu. I can read the files, but I'm not able to write to them. In Samba, I have created the following user:

        <www-data> = "<www-data>"

    and allowed guest access for the www folder:

        [www]
        comment = Ubuntu WWW area
        path = /var/www
        browsable = yes
        guest ok = yes
        read only = no
        create mask = 0755
        ;directory mask = 0775
        force user = www-data
        force group = www-data

    I've also run sudo chmod -R 755 www to ensure correct rw access. What am I missing in order to get write access to my Ubuntu files from Windows?
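
    One detail that would explain read-only behaviour: mode 755 grants write only to the owner, and with force user = www-data every access arrives as www-data, so files not owned by www-data become read-only regardless of the share flags. A sketch of the matching filesystem side (assumes handing the tree to www-data is acceptable, as the web-server setup suggests):

        sudo chown -R www-data:www-data /var/www
        sudo chmod -R 775 /var/www

    Raising create mask to 0775 in smb.conf keeps newly created files group-writable too.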

    Read the article

  • Hell: NTFS "Restore previous versions"...

    - by ttsiodras
    The hell I have experienced these last 24h:

    Windows 7 installation hosed after a bluetooth driver install. Attempting to recover using restore points via "Repair" on the bootable Win7 installation CD. Attempting to go back one day in the restore points. No joy. Attempting to go back two days in the restore points. No joy. Attempting to go back one week in the restore points. Still no joy. Windows won't boot. Apparently something is REALLY hosed.

    And then it hits me - PANIC - the restore points somehow reverted DATA files to their older versions! Word, Powerpoint, SPSS, etc. document versions are all one week old now, using the "freshest" restore point. Failed to restore yesterday's restore point!!! I am stuck at old versions of the data!!!

    Booting KNOPPIX, mounting the NTFS partition as read-only under KNOPPIX. Checking. Nope, the data files are still the one-week-old versions. Booting the Win7 CD, Recovery console - cmd prompt - navigating - yep, data files are still one week old. Removing the drive, mounting it under another Win7 installation. Still old data. Running NTFS undelete on the drive (read-only scan), searching for a file created yesterday. Not found. Despair.

    At this point, an idea: I will install a brand new Windows installation, keeping the old one in Windows.old (default behaviour of Windows installs). I boot the new install, I go to my C:\Data\ folder, I choose "Restore previous versions", click on yesterday's date, and click open... YES! It works! I can see the latest versions of my files (e.g. from yesterday). Thank God.

    And then, I try to view the files under the "yesterday snapshot-version" of C:\Users\MyAccount\Desktop ... and I get "Permission Denied" as soon as I try to open Users\MyAccount. I make sure I am an administrator. No joy. Apparently, the new Windows installation does not have access to read the "NTFS snapshots" or "Volume Shadow Snapshots" of my old Windows account! Cross-installation permissions? I need to somehow tell the new Windows install that I am the same "old" user... so that I will be able to access the Users\MyAccount folder of the snapshot of my old user account. Help?
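
    One avenue worth trying (an assumption about how shadow copies behave, not a confirmed fix): the snapshot is read-only, so taking ownership inside it is off the table, but an elevated backup-mode copy can often read past ACLs that block the interactive account. Once the previous version of the Users folder is open or mapped at some path (Z: below is a placeholder for wherever it appears):

        :: elevated command prompt; source is the opened previous-version folder
        robocopy "Z:\Users\MyAccount" "C:\Recovered\MyAccount" /E /B /XJ

    /B uses the Backup Operators privilege to bypass file security, and /XJ avoids looping on the junction points inside profiles.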

    Read the article

  • virtual machines and cryptography

    - by Unknown
    I suspect I'm a bit off-topic for the site's mission, but it seems more fitting for the question than Stack Overflow. I'm preparing to create a VM holding sensitive data (personal use; it will be a web+mail+... appliance of sorts), and I'd like to protect the data with cryptography. The final choice has to be cross-platform for the host. Basically, I have to choose between guest-system-level cryptography (say, dm-crypt or similar) or host-level cryptography with TrueCrypt. Do you think the "TrueCrypt-volume-contained virtualized disks" approach will hit the I/O performance of the VM badly (and therefore dm-crypt-like approaches inside the VM would be better), or is it doable? I'd like to protect all the guest data, not only my personal data, so I can suspend the VM freely without worrying about the swap partition, etc.
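
    For scale, the guest-side option involves only a handful of commands. A sketch of dm-crypt/LUKS on a dedicated data partition inside the VM (device and mount point are illustrative; encrypting the guest's root and swap is normally done at install time instead):

        # inside the guest: one-time setup
        sudo cryptsetup luksFormat /dev/vdb1
        sudo cryptsetup luksOpen /dev/vdb1 securedata
        sudo mkfs.ext4 /dev/mapper/securedata
        # per boot
        sudo mount /dev/mapper/securedata /srv/data

    Note the suspend concern cuts both ways: a suspended VM's memory image lands on the host unencrypted unless the host-level (TrueCrypt) layer covers the directory holding the suspend files.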

    Read the article

  • Cannot access new drive through NFS

    - by l.thee.a
    I am running nfs-kernel-server to share the files on my Linux machine (Ubuntu, exported as /share). The disk I have been using is full, so I have added a new disk and mounted it at /share/data. My other PC mounts the /share folder at /mnt/nfs, but cannot see the contents of /mnt/nfs/data. I have tried adding /share/data to /etc/exports, but it did not help. What do I do? PS: I am looking for a solution other than explicitly mounting /share/data as a second mount on the client.
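
    NFS does not descend into filesystems mounted beneath an export by default; the export has to be told to cross mount points (or the nested filesystem exported with nohide). A hedged /etc/exports sketch (the client specification is a placeholder):

        # /etc/exports: let clients of /share see the filesystem mounted at /share/data
        /share  192.168.1.0/24(rw,sync,no_subtree_check,crossmnt)

    followed by exportfs -ra to apply. crossmnt avoids the second client-side mount the question wants to avoid.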

    Read the article

  • LDAP sync with Outlook

    - by Dr Casper Black
    I have a task to research the possibilities of LDAP as a centralized address book. I have set up OpenLDAP on Debian 5.07. I managed to search the LDAP contacts from MS Outlook 2007 (with some drawbacks, like Outlook not recognizing the street and organization fields). My question is: is it possible, and how, to sync data on the LDAP server with applications that support LDAP? I could not find any data on this topic.

    EDIT: The point is to have a centralized list of contacts that can be synced with various applications, for instance Outlook, Thunderbird, phonebooks on mobile phones... etc. The question is: is it possible to transfer (update) data on a client application from the LDAP database and vice versa? So not just to search the LDAP server's data, but to download contacts that are not available in the client application (Outlook), to upload data to LDAP if a contact is in Outlook and not in the LDAP database, and the other way around - in other words, synchronize.
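
    LDAP itself only standardizes the directory operations; two-way sync is something a client or middleware drives using those operations. For testing what any sync tool would actually see and write, the OpenLDAP command-line clients are handy (a sketch; host, base DN and entry names are placeholders):

        # read: list address-book entries as a sync client would
        ldapsearch -x -H ldap://ldap.example.com -b "ou=contacts,dc=example,dc=com" \
            "(objectClass=inetOrgPerson)" cn mail telephoneNumber

        # write: push a changed phone number back (what the 'upload' side must do)
        ldapmodify -x -H ldap://ldap.example.com -D "cn=admin,dc=example,dc=com" -W <<'EOF'
        dn: cn=John Doe,ou=contacts,dc=example,dc=com
        changetype: modify
        replace: telephoneNumber
        telephoneNumber: +1 555 0100
        EOF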

    Read the article

  • Calculating IOPS for a single HDD - what am I doing wrong?

    - by red888
    So I know there is no standardized way of calculating IOPS for a HDD, but from everything I have read it appears one of the most accurate formulas is the following:

        IO/ms = {average seek time} + {rotational latency} + ({block size} / {data transfer rate})

    which is IOs per millisecond, or what the book I've been reading calls "disk service time". Rotational latency is calculated as half of one rotation, in milliseconds. This was taken from the EMC book "Information Storage and Management" - arguably a pretty reliable source, right/wrong?

    Putting this formula into practice, consider this Seagate data sheet. I am going to calculate IOPS for the ST3000DM001 model for a block size of 4kb:

    - Seek Average (Write) = 9.5 (I'll be measuring IOPS for writes)
    - Spindle speed = 7200rpm
    - Average Data Rate = 156MB/s

    So my variables are:

        Seek Time = 9.5ms
        Rotational latency = (.5 / (7200rpm / 60)) = 0.004s = 4ms
        Data Rate = 156MB/s = (0.156MB/ms / 0.004MB) = 39

        9.5ms + 4ms + 39 = 52.5 IO/ms
        1 / (52.5 * 0.001) = 19 IOPS

    19 IOPS for this drive clearly is not right, so what am I doing wrong?
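
    For contrast, here is the same formula with consistent units (my arithmetic, not from the question). The slip is in the transfer term: 0.156 MB/ms divided by 0.004 MB gives a unitless 39, when what the formula wants is the time to move one 4 KB block:

        Seek time          = 9.5 ms
        Rotational latency = 0.5 * (60 / 7200) s = 4.17 ms
        Transfer time      = 0.004 MB / 0.156 MB/ms = 0.026 ms

        Service time = 9.5 + 4.17 + 0.026 = 13.7 ms per IO
        IOPS         = 1 / 0.0137 s = ~73 IOPS

    Roughly 75 IOPS is indeed the ballpark usually quoted for a 7,200 rpm SATA drive.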

    Read the article
