Search Results

Search found 7566 results on 303 pages for 'jre lib ext'.


  • Finding Bluetooth link key in Windows 7, to double pair a device on dualboot computer

    - by Ilari Kajaste
    How can I dig up the Bluetooth link key for a paired device in Windows 7? Is this something that depends on the Bluetooth stack I'm using (Toshiba), or is there a generic place where Windows 7 stores these?

    Note: I'm not talking about the six-digit code usually typed by the user during pairing - that is worthless, since it's discarded after the pairing process. What I mean is the 128-bit link key that the devices exchange during pairing and use thereafter to encrypt all their Bluetooth traffic.

    Background: I dual-boot Windows 7 / Ubuntu on my laptop, and I would like to have my phone paired to both OSes. Since the dual-booting computer has only one Bluetooth adapter and thus only one Bluetooth address, I cannot do two pairings to the phone: on the second pairing (Windows) the phone just replaces the previous pairing (Linux) for the same Bluetooth address. A thread on the Ubuntu forums pointed me to what I have to do - pair first on Linux, then on Windows, and then replace the link key on the Linux side with the one Windows negotiated. I can find the Linux-side pairing key in /var/lib/bluetooth/[BD_ADDR]/linkkeys - no problems there. However, on the Windows side I can't find the key. According to the forum post, it should be in SYSTEM\ControlSet002\services\BTHPORT\Parameters\Keys\[BD_ADDR], but while that registry key does exist, it has no subkeys. (A similar registry path in ControlSet001 didn't have any subkeys either.)

    One thing I've been instructed to do is to capture all events during pairing with Sysinternals Process Monitor. I did this, but I haven't been able to find any useful information in the captured events, not even by exporting the data to a huge XML file and grepping it for the BD_ADDRs (with or without colons).

    So how could I find the link key for a paired device in Windows 7?

    Some reference information: Wikipedia: Bluetooth, Security Now: Bluetooth security
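
    Editor's note, a commonly suggested explanation (not confirmed by the poster): the Keys subkeys do exist but are ACL-protected so only the SYSTEM account can read them, which would make them appear empty in a normal regedit session. Running the registry editor as SYSTEM with Sysinternals PsExec makes them visible:

        # Launch regedit under the SYSTEM account so the protected
        # ...\BTHPORT\Parameters\Keys\[BD_ADDR] subkeys become readable
        psexec -s -i regedit.exe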


  • Fresh 12.04 Install - MySQL not starting

    - by Lee Armstrong
    I have a freshly installed Ubuntu 12.04 x64 server and I installed Percona Server from their official repositories. Trouble is, it will not start! mysql-error.log shows nothing obvious:

        121129 12:16:54 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql/
        121129 12:16:54 [Note] Plugin 'FEDERATED' is disabled.
        121129 12:16:54 InnoDB: The InnoDB memory heap is disabled
        121129 12:16:54 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121129 12:16:54 InnoDB: Compressed tables use zlib 1.2.3
        121129 12:16:54 InnoDB: Using Linux native AIO
        121129 12:16:54 InnoDB: Initializing buffer pool, size = 12.0G
        121129 12:16:54 InnoDB: Completed initialization of buffer pool
        121129 12:16:54 InnoDB: highest supported file format is Barracuda.
        121129 12:16:55 InnoDB: Waiting for the background threads to start
        121129 12:16:56 Percona XtraDB (http://www.percona.com) 1.1.8-rel29.1 started; log sequence number 1598476
        121129 12:16:56 [Note] Server hostname (bind-address): '0.0.0.0'; port: 3306
        121129 12:16:56 [Note] - '0.0.0.0' resolves to '0.0.0.0';
        121129 12:16:56 [Note] Server socket created on IP: '0.0.0.0'.
        121129 12:16:56 [Note] Event Scheduler: Loaded 0 events
        121129 12:16:56 [Note] /usr/sbin/mysqld: ready for connections.
        Version: '5.5.28-29.1-log'  socket: '/var/run/mysqld/mysql.sock'  port: 3306  Percona Server (GPL), Release 29.1
        121129 12:16:56 [Note] Event Scheduler: scheduler thread started with id 1

    And the syslog shows:

        Nov 29 12:17:07 V-PF-SQL1 /etc/init.d/mysql[2206]: 0 processes alive and '/usr/bin/mysqladmin --defaults-file=/etc/mysql/debian.cnf ping' resulted in
        Nov 29 12:17:07 V-PF-SQL1 /etc/init.d/mysql[2206]: #007/usr/bin/mysqladmin: connect to server at 'localhost' failed
        Nov 29 12:17:07 V-PF-SQL1 /etc/init.d/mysql[2206]: error: 'Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)'
        Nov 29 12:17:07 V-PF-SQL1 /etc/init.d/mysql[2206]: Check that mysqld is running and that the socket: '/var/run/mysqld/mysqld.sock' exists!

    The socket file is being created, and I can access the server without the socket using mysql -h 127.0.0.1 -P 3306 -u root -pPASSWORD
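
    Editor's note: the logs above actually contain a mismatch worth checking - the server announces its socket as /var/run/mysqld/mysql.sock, while the Debian init script (and the client) probe /var/run/mysqld/mysqld.sock. A minimal my.cnf sketch aligning the two:

        # /etc/mysql/my.cnf - make the server create the socket the
        # init script and clients expect
        [mysqld]
        socket = /var/run/mysqld/mysqld.sock

        [client]
        socket = /var/run/mysqld/mysqld.sock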


  • Mail not piping in postfix

    - by user220912
    I have set up a Postfix server and wanted to test piping mail to my script, where I can make use of it and filter the mails. I wrote a test script that just logs the information to a txt file, but I don't see any changes on sending the mail.

    My postconf -n output:

        alias_database = hash:/etc/aliases
        append_dot_mydomain = no
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        data_directory = /var/lib/postfix
        debug_peer_level = 2
        debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5
        html_directory = no
        inet_interfaces = all
        inet_protocols = all
        mail_owner = postfix
        mailbox_size_limit = 0
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = yantratech.co.in, localhost.localdomain, localhost
        myhostname = tcmailer8.in
        mynetworks = 103.8.128.62, 103.8.128.69/101, 168.100.189.0/28, 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        myorigin = $mydomain
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.6.6/README_FILES
        recipient_delimiter = +
        relayhost =
        sample_directory = /usr/share/doc/postfix-2.6.6/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
        smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
        smtpd_tls_cert_file = /etc/pki/tls/certs/tcmailer8.in.cert
        smtpd_tls_key_file = /etc/pki/tls/private/localhost.key
        smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
        smtpd_use_tls = yes
        transport_maps = hash:/etc/postfix/transport
        virtual_alias_maps = hash:/etc/postfix/virtual
        virtual_gid_maps = static:5000
        virtual_mailbox_base = /home/vmail
        virtual_mailbox_domains = /etc/postfix/vhosts
        virtual_mailbox_maps = hash:/etc/postfix/vmaps
        virtual_minimum_uid = 1000
        virtual_uid_maps = static:5000

    Here's my transport:

        [email protected] email_route

    My main.cf declaration:

        transport_maps = hash:/etc/postfix/transport

    My master.cf declaration:

        email_route unix - n n - - pipe
          flags=FR user=nobody argv=/etc/postfix/test.php -f $(sender) -- $(recipient)

    And my PHP script:

        #!/usr/bin/php
        <?php
        $fh = fopen('/etc/postfix/testmail.txt','a');
        fwrite($fh, "Hello it works\n");
        fclose($fh);
        ?>

    I am sending mails through telnet on localhost.
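
    Editor's note, a hedged sanity check: the pipe transport above runs the script as user=nobody, and that account typically can neither execute a non-executable script nor write under /etc/postfix. Both conditions are easy to test outside Postfix:

        # can 'nobody' run the script and write the log file at all?
        sudo -u nobody /etc/postfix/test.php
        sudo -u nobody touch /etc/postfix/testmail.txt
        chmod +x /etc/postfix/test.php   # the pipe target must be executable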


  • Windows Vista/7 dropping Mac Server share points

    - by Hooligancat
    My Windows Vista and Windows 7 clients are having problems maintaining access to SMB shares on a Mac server. The initial connection to the server appears to be OK, as the Windows clients can see all of the server share points. However, the clients randomly drop a couple of the share points, although they can still see the server.

    For example, if I have the following share points on the Mac server:

        Share A
        Share B
        Share C
        Share D
        Share E

    The Windows client can see and access these shares most of the time. But randomly a couple of the shares will just get dropped or go missing from the Windows client's view, so I end up with something like:

        Share B
        Share D
        Share E

    All the share points are established in the same way with the same permission settings. My Mac OS X Server is set up with the following for SMB:

        SMB sharing enabled
        Standalone Server
        Workgroup of CORPORATE
        Allow Guest Access = YES
        Client connections limit = 100
        Authentication: NTLMv2 & Kerberos and NTLM
        Code Page is Latin US (437)
        This is a workgroup master browser
        WINS registration is set to Enable WINS server (tried with setting off)
        Enable virtual share points for homes = YES

    I noticed in my SMB file service log that the clients appear to connect OK, but I get the following error, which implies a reset by either the server or the client:

        /SourceCache/samba/samba-187.9/samba/source/lib/util_sock.c:read_data(534) read_data: read failure for 4 bytes to client 192.168.0.99. = Connection reset by peer

    I am a bit stumped as to which direction to turn to try and get this resolved. Continued attempts to access the server from the client will reconnect to the share points, but they inevitably get dropped again in the near future. Any and all help much appreciated.


  • Syntax error at '{'; expected '}' when using nagios in puppet

    - by jiangchengwu
    It's a big problem for me, because I'm not familiar with Puppet.

    ERROR on the puppetmaster:

        debug: importing '/etc/puppet/manifests/nodes/group-1.pp'
        err: Could not parse for environment production: Syntax error at '{'; expected '}' at /etc/puppet/manifests/nodes/group-1.pp:6

    ERROR on the puppet client:

        err: Could not retrieve catalog from remote server: Error 400 on SERVER: Could not parse for environment production: Syntax error at '{'; expected '}' at /etc/puppet/manifests/nodes/group-1.pp:6

    In group-1.pp:

        node 'group1' {
            include ntp
            class { 'nagios::host':   # this is line 6
                nodename => $clientcert,
                appname  => 'test',
            }
        }

    The nagios::host code in module/nagios/host.pp is here:

        class nagios::host($nodename, $hostgroup) {
            file { '/usr/lib/nagios/plugins':
                mode    => "755",
                require => Package["nagios-plugins"],
            }
            ...
            @@nagios_service { "${nodename}_check_ssh":
                ensure                 => present,
                use                    => 'generic-service',
                host_name              => "${nodename}",
                notification_interval  => 60,
                flap_detection_enabled => 0,
                service_description    => "SSH",
                check_command          => "check_ssh",
                target                 => "/etc/nagios3/services.d/${nodename}.cfg",
            }
        }

    And the file module/nagios/init.pp is blank. How could I fix it?
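
    Editor's note, one plausible cause (an assumption - the poster doesn't state the Puppet version): the parameterized-class syntax class { 'nagios::host': ... } only exists in Puppet 2.6 and later, and older masters fail to parse it at exactly that opening brace. On an old master the classic workaround is dynamically scoped variables plus a plain include - a sketch:

        # group-1.pp, pre-2.6 style (hypothetical rewrite)
        node 'group1' {
            include ntp
            $nodename = $clientcert   # dynamic scoping: nagios::host reads these
            $appname  = 'test'
            include nagios::host
        }

    Note also that nagios::host declares a $hostgroup parameter while the node passes appname; that mismatch will surface as a separate error once the file parses.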


  • Multiple logins with pam_mount mean multiple (redundant) mounts ...

    - by Jamie
    I've configured pam_mount.so to automagically mount a CIFS share when users log in; the problem is that if a user logs in multiple times simultaneously, the mount command is repeated multiple times. This so far isn't a problem, but it's messy when you look at the output of a mount command:

        # mount
        /dev/sda1 on / type ext4 (rw,errors=remount-ro)
        proc on /proc type proc (rw,noexec,nosuid,nodev)
        none on /sys type sysfs (rw,noexec,nosuid,nodev)
        none on /sys/fs/fuse/connections type fusectl (rw)
        none on /sys/kernel/debug type debugfs (rw)
        none on /sys/kernel/security type securityfs (rw)
        none on /dev type devtmpfs (rw,mode=0755)
        none on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=0620)
        none on /dev/shm type tmpfs (rw,nosuid,nodev)
        none on /var/run type tmpfs (rw,nosuid,mode=0755)
        none on /var/lock type tmpfs (rw,noexec,nosuid,nodev)
        none on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
        //srv1/UserShares/jrisk on /home/jrisk type cifs (rw,mand)
        //srv1/UserShares/jrisk on /home/jrisk type cifs (rw,mand)
        //srv1/UserShares/jrisk on /home/jrisk type cifs (rw,mand)

    I'm assuming I need to fiddle with either the pam.d/common-auth file or pam_mount.conf.xml to accomplish this. How can I instruct pam_mount.so to avoid duplicate mounts?
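
    Editor's note, a hedged workaround sketch: pam_mount lets you override the mount command it runs (the <cifsmount> element in pam_mount.conf.xml), so one option is pointing it at a wrapper that silently succeeds when the target is already mounted. Everything below is hypothetical and assumes the mount point arrives as the second argument - adjust to match the command template you actually configure:

        #!/bin/sh
        # /usr/local/sbin/mount-once - skip duplicate mounts for pam_mount
        mountpoint -q "$2" && exit 0   # already mounted: report success, do nothing
        exec /bin/mount "$@"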


  • 3 simple questions about file permissions

    - by Camran
    1 - Wonder, is this a good setup of permissions in the /var directory?

        drwxr-xr-x  2 root root  4096 2010-05-30 03:34 backups
        drwxr-xr-x  7 root root  4096 2010-05-29 17:55 cache
        drwxr-xr-x 29 root root  4096 2010-05-29 17:55 lib
        drwxrwsr-x  2 root staff 4096 2009-07-14 04:36 local
        drwxrwxrwt  3 root root    60 2010-06-02 03:34 lock
        drwxr-xr-x  9 root root  4096 2010-06-02 03:34 log
        drwxrwsr-x  2 root man   4096 2009-09-20 20:36 mail
        drwxr-xr-x  2 root root  4096 2009-09-20 20:36 opt
        drwxrwxrwt 12 root root   420 2010-06-02 12:12 run
        drwxr-xr-x  4 root root  4096 2009-09-20 20:37 spool
        drwxrwxrwt  2 root root  4096 2009-07-14 04:36 tmp
        drwxr-xr-x 14 user root  4096 2010-05-30 22:21 www

    2 - Could you give me a brief explanation of the columns above? The first one is which permissions they have. The second is a number. The third and fourth say "root root", for example. The fifth is another number (4096, for example), and the others are obvious.

    3 - Could you give me a brief explanation of the folders above? Especially the "lock" and "tmp" folders. lock contains an apache2 folder which seems empty. Thanks
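
    Editor's note: since question 2 is about reading the listing itself, here is the standard breakdown of a long-listing line, annotated:

        drwxr-xr-x  2  root  root  4096  2010-05-30 03:34  backups
        #   |       |   |     |     |         |               └ name
        #   |       |   |     |     |         └ last-modified timestamp
        #   |       |   |     |     └ size in bytes (a directory entry is often one 4096-byte block)
        #   |       |   |     └ owning group
        #   |       |   └ owning user
        #   |       └ hard-link count (for a directory: 2 plus its subdirectories)
        #   └ file type (d = directory) and permission bits (rwx for user/group/other)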


  • Glassfish v3 fails on startup: "Cannot allocate memory"

    - by Shisoft
    This is clearly described in this question: Fail to start Glassfish 3.1: java.io.IOException: error=12, Cannot allocate memory. But in my case I have a 512M memory Ubuntu 10.04 VPS, so it seems I shouldn't need to change any configuration. Yet when I start the server, I get this exception:

        VM failed to start: java.io.IOException: Cannot run program "/usr/lib/jvm/java-6-sun-1.6.0.22/bin/java" (in directory "/home/glassfish/glassfish/domains/domain1/config"): java.io.IOException: error=12, Cannot allocate memory

    So I changed <jvm-options>-Xmx512</jvm-options> to <jvm-options>-Xmx400</jvm-options>. The exception remains. What did I do wrong?

    Result of free -m:

                     total       used       free     shared    buffers     cached
        Mem:           512         43        468          0          0          0
        -/+ buffers/cache:         43        468
        Swap:            0          0          0

    Result of cat /proc/user_beancounters:

        Version: 2.5
           uid  resource         held    maxheld    barrier      limit  failcnt
        146049: kmemsize      2670652    5385253   51200000   51200000        0
                lockedpages         0          8       2048       2048        0
                privvmpages     11134     134522     131200     262200        4
                shmpages          648       1352     128000     128000        0
                dummy               0          0          0          0        0
                numproc            12         73        500        500        0
                physpages        6519      28162          0  200000000        0
                vmguarpages         0          0     512000     512000        0
                oomguarpages     6527      28169     512000     512000        0
                numtcpsock          4         14       4096       4096        0
                numflock            0          5       2048       2048        0
                numpty              1          2         32         32        0
                numsiginfo          0          3       1024       1024        0
                tcpsndbuf      159600     265744   20480000   20480000        0
                tcprcvbuf       65536    3590352   20480000   20480000        0
                othersockbuf    44232      90640   20480000   20480000        0
                dgramrcvbuf         0      12848   10240000   10240000        0
                numothersock       22         31       2048       2048        0
                dcachesize          0          0   10240000   10240000        0
                numfile          1002       1474      50000      50000        0
                dummy               0          0          0          0        0
                dummy               0          0          0          0        0
                dummy               0          0          0          0        0
                numiptent          24         24       2048       2048        0

    Thanks
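
    Editor's note, two things stand out (my reading, not confirmed by the poster): the beancounters show failcnt = 4 on privvmpages, i.e. the OpenVZ memory limit really has been hit, and the heap options are missing a unit suffix - without one the JVM interprets the value in bytes. A sketch of the presumably intended setting:

        <!-- domain.xml: note the trailing m; -Xmx400 alone means 400 bytes -->
        <jvm-options>-Xmx400m</jvm-options>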


  • Hadoop streaming job on EC2 stays in "pending" state

    - by liamf
    Trying to experiment with Hadoop and Streaming using the Cloudera distribution CDH3 on Ubuntu. Have valid data in hdfs:// ready for processing. Wrote a little streaming mapper in Python. When I launch a mapper-only job using:

        hadoop jar /usr/lib/hadoop/contrib/streaming/hadoop-streaming*.jar \
          -file /usr/src/mystuff/mapper.py -mapper /usr/src/mystuff/mapper.py \
          -input /incoming/STBFlow/* -output testOP

    hadoop duly decides it will use 66 mappers on the cluster to process the data. The testOP directory is created on HDFS. A job_conf.xml file is created. But the job tracker UI at port 50030 never shows the job moving out of the "pending" state and nothing else happens. CPU usage stays at zero. (The job is created, though.) If I give it a single file (instead of the entire directory) as input, same result (except Hadoop decides it needs 2 mappers instead of 66). I also tried using the "dumbo" Python utility and launching jobs using that: same result, permanently pending. So I am missing something basic: could someone help me out with what I should look for? The cluster is on Amazon EC2. Firewall issues maybe: ports are enabled explicitly, case by case, in the cluster security group.
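
    Editor's note, a hedged first check: a job that is created but stays pending forever with zero CPU usually means no TaskTrackers ever registered with the JobTracker (which on EC2 is very often a security-group/hostname-resolution issue between nodes). Quick ways to confirm, assuming the CDH3 CLI and default log locations:

        # do any TaskTrackers show up at all?
        hadoop job -list-active-trackers
        # and on a worker node, look for connection errors to the JobTracker:
        tail -f /var/log/hadoop/*tasktracker*.log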


  • fglrx-legacy-driver not seeing Radeon HD 4650 AGP

    - by Rocket Hazmat
    I am running Debian Squeeze on an old Dell Dimension 8300 box. It has an AGP Radeon HD 4650 card. I use this machine to mine bitcoins, and today I noticed that the machine had rebooted! My precious uptime! Anyway, my miner wouldn't start, so I figured I might as well update my graphics driver; maybe that would fix the issue. I went to amd.com and downloaded the newest driver (12.6 legacy), but after installing it, aticonfig gave an error:

        aticonfig: No supported adapters detected

    I uninstalled the driver and figured I'd try to install it from apt. AMD has dropped support for the HD 4000 series in fglrx, forcing me to use fglrx-legacy-driver (currently only in experimental). In order to install this, I had to update libc6 (and some other important packages, like gcc) to their wheezy versions. I finally got fglrx-legacy-driver installed, but I still got:

        aticonfig: No supported adapters detected

    Why isn't the driver finding my video card? I have a hunch it has something to do with the fact that it's an AGP video card. Here is the output of lspci -v (why does it say Kernel driver in use: fglrx_pci?):

        01:00.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RV730 Pro AGP [Radeon HD 4600 Series] (prog-if 00 [VGA controller])
            Subsystem: Advanced Micro Devices [AMD] nee ATI Device 0028
            Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 16
            Memory at e0000000 (32-bit, prefetchable) [size=256M]
            I/O ports at de00 [size=256]
            Memory at fe9f0000 (32-bit, non-prefetchable) [size=64K]
            Expansion ROM at fea00000 [disabled] [size=128K]
            Capabilities: [50] Power Management version 3
            Capabilities: [58] AGP version 3.0
            Kernel driver in use: fglrx_pci

    EDIT: fglrx 12.4 seems to work. Thing is, since I am on kernel 3.2, I need to apply this patch to common/lib/modules/fglrx/build_mod/firegl_public.c. I thought ATI dropped support for the 4xxx series after 12.4. Why doesn't 12.6 legacy work?


  • How to locate phpmyadmin on ubuntu

    - by Chris
    Okay, I'm usually a Windows user and I write quite happily there. Unfortunately (or fortunately) I have installed Linux on a dual boot, and having installed some software I have a question... where is it?

    I installed Apache, PHP, MySQL and, separately, phpMyAdmin. Apache is up and running, I've seen my phpinfo page, and MySQL is there. MySQL is telling me that there's a database for phpMyAdmin, but... erm... I can't seem to locate it. On a Windows machine the directory would be in the www directory and I'd just navigate there - localhost/phpmyadmin/ - but on Ubuntu I can't find it in the equivalent. I've been to /var/www/ and there's my index.html (from Apache) and my phptest.php file, but no phpmyadmin. There is a phpmyadmin in /lib but that only has 2 files in it.

    So, having rambled lots, my question is: what do I have to do to be able to navigate to the phpMyAdmin index page? I realise this could fall under the description of a server-related question and should be posted elsewhere, but as it's software on a home system some help would be appreciated. Do I need to move some files from somewhere? Help! I really don't want to have to go back to developing on Windows, as I'll be deploying to a LAMP system and my learning curve will be steep.
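
    Editor's note (this assumes phpMyAdmin was installed from the stock Ubuntu/Debian package): the code lives in /usr/share/phpmyadmin rather than under /var/www, and the package ships its own Apache fragment that maps it to the /phpmyadmin URL:

        dpkg -L phpmyadmin | head        # see where the package put its files
        sudo ln -s /etc/phpmyadmin/apache.conf /etc/apache2/conf.d/phpmyadmin.conf
        sudo /etc/init.d/apache2 reload  # then browse to http://localhost/phpmyadmin/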


  • ldap-authentication without sambaSamAccount on linux smb/cifs server (e.g. samba)

    - by umlaeute
    I'm currently running samba-3.5.6 on a Debian/wheezy host to act as the fileserver for our department's Win32 clients. Authentication is done via OpenLDAP, where each user DN has an objectclass:sambaSamAccount that holds the SMB credentials and an objectclass:shadowAccount/posixAccount for "ordinary" authentication (e.g. PAM, Apache, ...).

    Now we would like to dump our department's user DB and instead authenticate against the user DB of our upstream organisation. These user accounts are managed in a Novell eDirectory, which I can already use for PAM authentication (e.g. for SSH logins, on another host). Our upstream organisation provides SMB/CIFS-based access (via some Novell service) to some directories, which I can access from my Linux client via smbclient.

    What I currently can't manage is to use the upstream LDAP (the eDirectory) to authenticate our institution's Samba. I configured my Samba server to authenticate against the upstream LDAP server:

        passdb backend = ldapsam:ldaps://ldap.example.com

    but when I try to authenticate a user, I get:

        $ smbclient -U USER \\\\SMBSERVER\\test
        Enter USER's password:
        Domain=[WORKGROUP] OS=[Unix] Server=[Samba 3.6.6]
        tree connect failed: NT_STATUS_ACCESS_DENIED

    The logfiles show:

        [2012/10/02 09:53:47.692987, 0] passdb/secrets.c:350(fetch_ldap_pw)
          fetch_ldap_pw: neither ldap secret retrieved!
        [2012/10/02 09:53:47.693131, 0] lib/smbldap.c:1180(smbldap_connect_system)
          ldap_connect_system: Failed to retrieve password from secrets.tdb

    I see two problems I'm having:

    1. I don't have any administrator password for the upstream LDAP (and most likely, they won't give me one). I only want to authenticate my users; write access is not needed at all. Can I get away with that?

    2. The upstream LDAP does not have any Samba-related attributes in the DB. I was under the impression that those attributes are required for Samba to authenticate, as SMB/CIFS uses some trivial hashing which is not compatible with the usual posixAccount hashes.

    Is there a way for my department's Samba server to authenticate against such an LDAP server?
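
    Editor's note: the second log line at least has a direct remedy (whether it solves the larger design question is another matter) - the DN configured as ldap admin dn in smb.conf needs its bind password stored in secrets.tdb, which is done with:

        # store the password samba uses to bind to the LDAP server
        smbpasswd -w 'bind-password'

    That bind DN does not have to be a directory administrator; read access to the user entries suffices for lookups. The second concern stands, though: without sambaSamAccount-style NT hashes in the directory, an ldapsam backend has nothing to verify NTLM authentication against.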


  • No sound out of headphone port

    - by Thanatos
    I cannot get sound out of the headphone port. Headphones are plugged in, and sound comes out of the internal speakers. Windows behaves normally (sound switches to headphones when headphones are inserted). It did work in Linux at one point, but something changed, we're just not sure what. Rebooting doesn't fix it. This appears to occur whether or not PulseAudio is running.

    Things I've tried:

        Rebooting. No effect.
        Booting into Windows. It works properly, so probably not a hardware issue.
        All of alsamixer. My only controls are:
            "Master"             - volume bar & mutable; unmuted. Controls volume.
            "PCM"                - volume bar only. 100%.
            "S/PDIF"             - mutable only; currently muted. Has no effect.
            "S/PDIF Default PCM" - mutable only; currently unmuted. Has no effect.
        Killing PulseAudio. No effect. (It also won't stay dead! Something appears to be restarting it, and I can't tell what, but it is annoying as fuck.)
        alsactl init 0. No effect.
        sudo rm -f /var/lib/alsa/asound.state. No effect.

    General system info:

        Ubuntu 10.04 LTS
        Toshiba Satellite T135D-S1324
        lspci says I have:
            00:14.2 Audio device: ATI Technologies Inc SBx00 Azalia (Intel HDA)
            01:05.1 Audio device: ATI Technologies Inc RS780 Azalia controller
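
    Editor's note, a hedged suggestion: broken jack-sensing on HDA codecs is often worked around by forcing a model hint for snd-hda-intel, but the correct value is codec-specific, so treat the one below strictly as a placeholder and consult the ALSA HD-Audio-Models documentation for your codec:

        # /etc/modprobe.d/alsa-base.conf - 'auto' is a placeholder value
        options snd-hda-intel model=auto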


  • How to tell nginx to honor backend's cache?

    - by ChocoDeveloper
    I'm using php-fpm with nginx as the HTTP server (I don't know much about reverse proxies, I just installed it and didn't touch anything), without Apache or Varnish. I need nginx to understand and honor the HTTP headers I send. I tried with this config (taken from the docs) but it didn't work.

    /etc/nginx/nginx.conf:

        fastcgi_cache_path /var/lib/nginx/cache levels=1:2 keys_zone=website:10m inactive=10m;
        fastcgi_cache_key "$scheme$request_method$host$request_uri";

    /etc/nginx/sites-available/website:

        server {
            fastcgi_cache website;
            #fastcgi_cache_valid 200 302 1h;
            #fastcgi_cache_valid 301 1d;
            #fastcgi_cache_valid any 1m;
            #fastcgi_cache_min_uses 1;
            #fastcgi_cache_use_stale error timeout invalid_header http_503;
            add_header X-Cache $upstream_cache_status;
        }

    I always get "MISS" and the cache dir is empty. If I uncomment the other directives, I get a HIT, but I don't want those "dumb" settings; I need to control caching from my backend. For example, if my backend says "public, s-maxage=10", the cache should be considered stale after 10 secs. Instead, nginx will store it for 1h because of these directives.

    I was thinking about whether I should try proxy_cache instead; I'm not sure what the difference is. The docs for both the fastcgi and proxy modules say this:

        The cache honors backend's Cache-Control, Expires, and etc. since version 0.7.48, Cache-Control: private and no-store only since 0.7.66, though. Vary handling is not implemented.

    nginx version: nginx/1.1.19

    Any thoughts? PS: I also have the reverse proxy that is offered by Symfony2 (which I turn off to use nginx's). The headers are interpreted correctly by it, so I think I'm doing it right.
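
    Editor's note, one common gotcha worth ruling out (an assumption, since the backend's actual response headers aren't shown): nginx refuses to cache a response that carries a Set-Cookie header, which PHP sessions add silently, and in that case every request is a MISS regardless of Cache-Control. A quick way to see exactly what the backend emits:

        # dump only the response headers; look for Set-Cookie, Cache-Control, Expires
        curl -s -D - -o /dev/null http://localhost/some-page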


  • Rails 3 + Nginx + Passenger -- Routing index

    - by Bijan
    I have no index.html file in my public folder; my Rails routes file handles the root route, and it works fine when I run 'rails server' on my machine. I'm trying to deploy the app, and I have Passenger and nginx running. When I run rails server on my local machine it works fine, but on the production server nginx just tries to serve a static file when I access the site. Here's my nginx conf:

        worker_processes  1;
        #pid  logs/nginx.pid;

        events {
            worker_connections  1024;
        }

        http {
            passenger_root /usr/lib/ruby/gems/1.9.1/gems/passenger-3.0.2;
            passenger_ruby /usr/bin/ruby;

            include       mime.types;
            default_type  application/octet-stream;
            sendfile      on;
            keepalive_timeout  65;

            server {
                listen       80;
                server_name  mmjconsult.com;
                root         /www/mmjs/public;
                access_log   logs/host.access.log;
                passenger_enabled on;
            }
        }

    Thank you for any help. I really appreciate it.
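
    Editor's note, a hedged guess given that the config itself looks plausible: passenger_enabled is silently ignored if nginx wasn't built with the Passenger module compiled in (nginx of this vintage had no dynamic module loading), and in that case nginx falls back to serving static files from root. The usual check and remedy:

        nginx -V 2>&1 | grep -o passenger   # does the build include passenger?
        passenger-install-nginx-module      # if not, rebuild nginx with it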


  • Decrease in disk performance after partitioning and encryption, is this much of a drop normal?

    - by Biohazard
    I have a server that I only have remote access to. Earlier in the week I repartitioned the 2-disk RAID as follows:

        Filesystem              Size  Used Avail Use% Mounted on
        /dev/mapper/sda1_crypt  363G  1.8G  343G   1% /
        tmpfs                   2.0G     0  2.0G   0% /lib/init/rw
        udev                    2.0G  140K  2.0G   1% /dev
        tmpfs                   2.0G     0  2.0G   0% /dev/shm
        /dev/sda5               461M   26M  412M   6% /boot
        /dev/sda7               179G  8.6G  162G   6% /data

    The RAID consists of 2 x 300GB SAS 15k disks. Prior to the changes I made, it was being used as a single unencrypted root partition, and hdparm -t /dev/sda was giving readings around 240MB/s, which I still get if I run it now:

        /dev/sda:
         Timing buffered disk reads: 730 MB in 3.00 seconds = 243.06 MB/sec

    Since the repartition and encryption, I get the following on the separate partitions.

    Unencrypted /dev/sda7:

        /dev/sda7:
         Timing buffered disk reads: 540 MB in 3.00 seconds = 179.78 MB/sec

    Unencrypted /dev/sda5:

        /dev/sda5:
         Timing buffered disk reads: 476 MB in 2.55 seconds = 186.86 MB/sec

    Encrypted /dev/mapper/sda1_crypt:

        /dev/mapper/sda1_crypt:
         Timing buffered disk reads: 150 MB in 3.03 seconds = 49.54 MB/sec

    I expected a drop in performance on the encrypted partition, but not that much, and I didn't expect a drop in performance on the other partitions at all. The other hardware in the server is 2 x Quad Core Intel(R) Xeon(R) CPU E5405 @ 2.00GHz and 4GB RAM.

        $ cat /proc/scsi/scsi
        Attached devices:
        Host: scsi0 Channel: 00 Id: 32 Lun: 00
          Vendor: DP       Model: BACKPLANE        Rev: 1.05
          Type:   Enclosure                        ANSI  SCSI revision: 05
        Host: scsi0 Channel: 02 Id: 00 Lun: 00
          Vendor: DELL     Model: PERC 6/i         Rev: 1.11
          Type:   Direct-Access                    ANSI  SCSI revision: 05
        Host: scsi1 Channel: 00 Id: 00 Lun: 00
          Vendor: HL-DT-ST Model: CD-ROM GCR-8240N Rev: 1.10
          Type:   CD-ROM                           ANSI  SCSI revision: 05

    I'm guessing this means the server has a PERC 6/i RAID controller? The encryption was done with default settings during the Debian 6 installation. I can't recall the exact specifics and am not sure how I go about finding them. Thanks
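
    Editor's note, two reference points that may explain part of this (general knowledge, not measurements from this box): sequential throughput falls toward the inner tracks of a spinning disk, so partitions placed later in the layout will benchmark slower than the start of the drive that hdparm -t /dev/sda reads, and dm-crypt in kernels of this era ran encryption on a single CPU core. The cipher settings the installer chose can be read back with:

        # show cipher, key size and offset for the encrypted mapping
        sudo cryptsetup status sda1_crypt
        sudo cryptsetup luksDump /dev/sda1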


  • How do I get a network printer installed in ubuntu 9.04?

    - by SoaperGEM
    My girlfriend's work computer now has Linux on one of the partitions, and for the most part it's running fine - except that I can't seem to get the network printers configured right. There are two of them: a Lanier MP 7500/LD275 and a Lanier MP C3000/LD430c, and Linux seems to have found them both automatically. I'll go through the steps of what I did, and what exactly went wrong.

    I went to Administration > Printing, and clicked the new printer button. It searched for printers and found them both, listed under "Network Printers." I added them as new printers in succession. However, when I clicked "Print a Test Page," it failed, saying there was a broken pipe. The device URIs were saved as socket://[ip address]:9100. I changed these to lpd://[ip address] per some online tutorial, which at first looked like it might have worked (but didn't). Now when I try to print a test page, it first says Processing (sometimes even Processing - printing test page, 4%), but always subsequently displays Idle - /usr/lib/cups/backend/lpd failed.

    Help! What do I do? It seems like Linux can find these printers just fine, and the drivers seem to be in place, so what's going wrong?
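
    Editor's note, a hedged detail that commonly trips up the lpd backend: an lpd URI usually needs a queue component, e.g. lpd://[ip address]/QUEUE (the queue name here is a placeholder - it depends on the printer's print server), whereas socket://[ip address]:9100 needs none. The CUPS error log usually names the real failure:

        tail -f /var/log/cups/error_log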


  • tftpd starts randomly

    - by Mutant
    A few days ago my Little Snitch filter started popping up tftpd. I'd never seen this before, so I immediately started freaking out, thinking my Mac had been compromised. I can't find anything unusual on the system. The process usually dies before I can trace it (Little Snitch never allowed the connection, it just left the popup up). I finally caught it once, and found this:

        [10:32]: sudo lsof -nlP | fgrep tftp
        Password:
        tftpd  1924 18446744  cwd  DIR  1,3       1326          2 /
        tftpd  1924 18446744  txt  REG  1,3      29856  163979456 /usr/libexec/tftpd
        tftpd  1924 18446744  txt  REG  1,3     600576  163686622 /usr/lib/dyld
        tftpd  1924 18446744  txt  REG  1,3  303300608  189014898 /private/var/db/dyld/dyld_shared_cache_x86_64
        tftpd  1924 18446744    0u IPv4 0x34a76100fcbb06e3  0t0   UDP *:55818
        tftpd  1924 18446744    2u IPv4 0x34a76100f1113c53  0t0   UDP *:69

        [10:32]: ps ax | fgrep 1924
        1924   ??  S      0:00.00 /usr/libexec/tftpd -i /private/tftpboot
        1949 s000  S+     0:00.00 fgrep 1924

    For the life of me I can't figure out what is starting this. Nothing in cron, launch daemons, etc. Google searches haven't yielded much either. The connection IP is different each time. So my question is: has anyone seen anything like this before?
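
    Editor's note, for context (standard OS X behaviour, so a likely explanation rather than a compromise): tftpd ships with the system and is started on demand by launchd whenever something sends a packet to UDP port 69; the -i /private/tftpboot invocation above matches the stock job. To check for it and permanently disable it:

        sudo launchctl list | grep -i tftp
        sudo launchctl unload -w /System/Library/LaunchDaemons/tftp.plist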


  • deploy git project and permission issue

    - by nixer
    I have a project hosted with gitolite on my own server, and I would like to deploy the whole project from the gitolite bare repository to an Apache-accessible place, via a post-receive hook. I have the following hook content:

        echo "starting deploy..."
        WWW_ROOT="/var/www_virt.hosting/domain_name/htdocs/"
        GIT_WORK_TREE=$WWW_ROOT git checkout -f
        exec chmod -R 750 $WWW_ROOT
        exec chown -R www-data:www-data $WWW_ROOT
        echo "finished"

    The hook can't finish without an error message:

        chmod: changing permissions of `/var/www_virt.hosting/domain_name/file_name': Operation not permitted

    which means that git does not have enough rights to do it. The git source path is /var/lib/gitolite/project.git/, which is owned by gitolite:gitolite, and with these permissions Redmine (running under the www-data user) can't reach the git repository to fetch all changes. The whole project should be placed in /var/www_virt.hosting/domain_name/htdocs/, which is owned by www-data:www-data.

    What changes should I make so that the post-receive hook in git works properly, and Redmine works with the repository? What I did is:

        # id www-data
        uid=33(www-data) gid=33(www-data) groups=33(www-data),119(gitolite)
        # id gitolite
        uid=110(gitolite) gid=119(gitolite) groups=119(gitolite),33(www-data)

    That did not help. I want to have no problems with Apache (viewing the project), Redmine reading source files for the project (under git), and git (deploying to the www-data-accessible path). What should I do?
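
    Editor's note, two observations from the hook itself (not a complete solution): exec replaces the running shell, so the first exec chmod line ends the script and the chown and final echo never execute at all; and chown to another user always requires root, no matter what supplementary groups the gitolite user has. A sketch with the control flow fixed, delegating the ownership change to sudo (the NOPASSWD sudoers rule for gitolite is an assumption you would have to add):

        #!/bin/sh
        echo "starting deploy..."
        WWW_ROOT="/var/www_virt.hosting/domain_name/htdocs/"
        GIT_WORK_TREE="$WWW_ROOT" git checkout -f
        chmod -R 750 "$WWW_ROOT"                      # no exec: let the script continue
        sudo chown -R www-data:www-data "$WWW_ROOT"   # needs a sudoers entry for gitolite
        echo "finished"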


  • Error authenticating git repository with Redmine

    - by woni
    I've set up Redmine 2.1 on my Debian Squeeze server following this tutorial: HowTo configure Redmine for advanced git integration (I tried to use the grack path). The Redmine server is running properly, but I have a problem granting users access to git repositories. When I try to clone a repository it says:

        error: The requested URL returned error: 500 while accessing

    The Apache error.log shows this entry:

        [Fri Sep 28 15:50:56 2012] [crit] [client xx.xx.xx.xx] configuration error: couldn't check user. Check your authn provider!: /repo.git/info/refs

    It also asks me for user and password when cloning, but it shouldn't if I understand the tutorial right. I'm using the Redmine authentication module:

        <VirtualHost *:80>
            ServerName my.server.at
            DocumentRoot "/var/www/my.server.at/public"

            PerlLoadModule Apache::Redmine

            <Directory "/var/www/my.server.at/public">
                Options None
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>

            SetEnv REMOTE_USER=$REDIRECT_REMOTE_USER"
            SetEnv GIT_PROJECT_ROOT /var/git/my.server.at/
            SetEnv GIT_HTTP_EXPORT_ALL
            ScriptAlias /git/ /usr/lib/git-core/git-http-backend

            <Location />
                Order allow,deny
                Allow from all

                AuthType Basic
                AuthName Git
                Require valid-user
                AuthBasicAuthoritative Off
                AuthUserFile /dev/null
                AuthGroupFile /dev/null

                PerlAccessHandler Apache::Authn::Redmine::access_handler
                PerlAuthenHandler Apache::Authn::Redmine::authen_handler

                RedmineDSN "DBI:mysql:database=redmine;host=localhost"
                RedmineDbUser "user"
                RedmineDbPass "password"
                RedmineGitSmartHttp yes
            </Location>
        </VirtualHost>

    Can someone help me please and explain the error and what I can do to solve my problem?


  • PHP pages are not parsed by Apache on CentOS

    - by Ram
    I have installed CentOS 5.x, Apache 2.2, PHP 5.3 and MySQL 5.5. I also installed phpMyAdmin. I am able to access phpMyAdmin through the browser without any issues. However, when I create a simple index.php with the phpinfo() function in the default directory, that page is served without PHP parsing. As we all know, phpMyAdmin is a PHP application; it works fine from the same server, but the simple PHP page in the doc root directory does not??!!! Of course, I tried moving this page into the phpMyAdmin folder and accessing it there, but no success.

    Please note that I updated the httpd.conf file with the appropriate directives based on the PHP installation guide. The following directives were added:

        AddTyoe application/x-httpd-php
        LoadModule php5_module /usr/lib/httpd/modules/libphp5.so
        <FilesMatch "\.php$">
            SetHandler application/x-httpd-php
        </FilesMatch>

    File locations are:

        docroot           - /var/www/html
        phpMyAdmin folder - /var/www/html/phpMyAdmin

    File privileges are:

        [root@linuxdev1 html]# ls -Z
        -rwxr-xr-x  root root index.php
        drwxr-xr-x  root root phpMyAdmin
        -rw-r--r--  root root phpMyAdmin-3.4.3.2-english.tar.gz
        drwxr-xr-x  root root test1

    Any help is appreciated.
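
    Editor's note: if the snippet above is a faithful paste, the first directive deserves a second look - AddTyoe is misspelled (Apache normally refuses to start on an unknown directive, so this may equally be a transcription slip), and AddType also requires a file-extension argument. The conventional form is:

        AddType application/x-httpd-php .php
        LoadModule php5_module /usr/lib/httpd/modules/libphp5.so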


  • disk space keeps filling up on EC2 instance with no apparent files/directories

    - by sasher
    How come the OS shows 6.5G used, but I see only 3.6G in files/directories? I'm running as root on an Amazon Linux AMI (seems like CentOS), with lots of free memory available, no swapping going on, and no apparent file descriptor issue. The only thing I can think of is a log file that was deleted while applications still append to it. Disk space usage is slowly but continuously rising towards full capacity (~1K/min, with very small decreases from time to time). Any explanation? Solution?

        du --max-depth=1 -h /
        1.2G  /usr
        4.0K  /cgroup
        22M   /lib64
        11M   /sbin
        19M   /etc
        52K   /dev
        2.1G  /var
        4.0K  /media
        0     /sys
        4.0K  /selinux
        du: cannot access `/proc/14024/task/14024/fd/4': No such file or directory
        du: cannot access `/proc/14024/task/14024/fdinfo/4': No such file or directory
        du: cannot access `/proc/14024/fd/4': No such file or directory
        du: cannot access `/proc/14024/fdinfo/4': No such file or directory
        0     /proc
        18M   /home
        4.0K  /logs
        8.1M  /bin
        16K   /lost+found
        12M   /tmp
        4.0K  /srv
        35M   /boot
        79M   /lib
        56K   /root
        67M   /opt
        4.0K  /local
        4.0K  /mnt
        3.6G  /

        df -h
        Filesystem  Size  Used Avail Use% Mounted on
        /dev/xvda1  7.9G  6.5G  1.4G  84% /
        tmpfs       3.7G     0  3.7G   0% /dev/shm

        sysctl fs.file-nr
        fs.file-nr = 864 0 761182
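
    Editor's note: the deleted-log theory is easy to confirm with standard lsof usage - a file that has been unlinked while a process still holds it open keeps its blocks allocated, and such files show up with a link count of zero:

        # list open files whose on-disk link count is below 1, i.e. deleted
        lsof +L1
        # restarting (or telling it to reopen its logs) the offending daemon
        # releases the space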


  • Tuning up a MySQL server

    - by NinjaCat
    I inherited a MySQL server, so I've started by running the MySQLTuner.pl script. I am not a MySQL expert, but I can see that there is definitely a mess here. I'm not looking to go after every single thing that needs fixing and tuning, but I do want to grab the major, low-hanging fruit. Total memory on the system is 512MB. Yes, I know it's low, but it's what we have for the time being.

    Here's what the script had to say:

        General recommendations:
            Run OPTIMIZE TABLE to defragment tables for better performance
            MySQL started within last 24 hours - recommendations may be inaccurate
            Enable the slow query log to troubleshoot bad queries
            When making adjustments, make tmp_table_size/max_heap_table_size equal
            Reduce your SELECT DISTINCT queries without LIMIT clauses
            Increase table_cache gradually to avoid file descriptor limits
            Your applications are not closing MySQL connections properly
        Variables to adjust:
            query_cache_limit (> 1M, or use smaller result sets)
            tmp_table_size (> 16M)
            max_heap_table_size (> 16M)
            table_cache (> 64)
            innodb_buffer_pool_size (>= 326M)

    For the variables it recommends I adjust, I don't even see most of them in the mysql.cnf file:

        [client]
        port   = 3306
        socket = /var/run/mysqld/mysqld.sock

        [mysqld_safe]
        socket = /var/run/mysqld/mysqld.sock
        nice   = 0

        [mysqld]
        innodb_buffer_pool_size = 220M
        innodb_flush_log_at_trx_commit = 2
        innodb_file_per_table = 1
        innodb_thread_concurrency = 32
        skip-locking
        big-tables
        max_connections = 50
        innodb_lock_wait_timeout = 600
        slave_transaction_retries = 10
        innodb_table_locks = 0
        innodb_additional_mem_pool_size = 20M
        user    = mysql
        socket  = /var/run/mysqld/mysqld.sock
        port    = 3306
        basedir = /usr
        datadir = /var/lib/mysql
        tmpdir  = /tmp
        skip-external-locking
        bind-address = localhost
        key_buffer = 16M
        max_allowed_packet = 16M
        thread_stack = 192K
        thread_cache_size = 4
        myisam-recover = BACKUP
        query_cache_limit = 1M
        query_cache_size = 16M
        log_error = /var/log/mysql/error.log
        expire_logs_days = 10
        max_binlog_size = 100M
        skip-locking
        innodb_file_per_table = 1
        big-tables

        [mysqldump]
        quick
        quote-names
        max_allowed_packet = 16M

        [mysql]

        [isamchk]
        key_buffer = 16M

        !includedir /etc/mysql/conf.d/
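
    Editor's note: variables that don't appear in the file are simply running at their compiled-in defaults; adding them under [mysqld] is all that's needed. A hedged starting point following the tuner's own suggestions (the values are guesses to iterate on, and raising innodb_buffer_pool_size to the suggested 326M looks dubious on a 512MB machine that already grants it 220M):

        [mysqld]
        tmp_table_size      = 32M
        max_heap_table_size = 32M   # keep these two equal, per the tuner
        table_cache         = 128   # raise gradually; watch file descriptor limits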


  • Size of modules within initrd

    - by LiKao
    I am currently trying to manually replace the kernel within ubuntu-core on an embedded device with a custom kernel. However, when I update the initrd, it becomes much bigger. Here is what I did:

        1. Extract the initrd that shipped with Ubuntu.
        2. Make a list of all modules within the old initrd.
        3. Get the same modules from the new module directory at /lib/modules/new_kernel_version.
        4. Add these modules to the initrd and remove the old ones.

    If I do this, the initrd becomes quite a bit bigger than the original one, so I checked the individual modules. I picked the btrfs.ko filesystem driver as an example, and by comparing the two modules I noticed the one I just picked into the initrd was much bigger, which causes the difference in size.

        -rw-r--r-- 1 root root 999K Nov 14 15:06 btrfs.ko

    for the btrfs.ko within the shipped initrd, versus

        -rw-r--r-- 1 root root 7.2M Nov 14 15:08 btrfs.ko

    for the new btrfs.ko.

    What is different between these two modules? Could this be caused by some faulty setting for the new kernel? When producing the kernel I copied /proc/config.gz and used make oldconfig to update it, so all optimisations should be the same for both kernels. Or is there something else that is done to the modules before they are put into the initrd? Maybe there is even some better way to build a new initrd for the new kernel in Ubuntu altogether.
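
    Editor's note: a size jump of this magnitude usually means the new modules still carry debug info (a likely explanation for the 999K vs 7.2M gap, though not verifiable from the listing alone). Distribution kernels strip modules at install time; the same can be done for a custom build:

        # strip one module in place for comparison
        strip --strip-debug btrfs.ko
        # or strip everything during installation
        make INSTALL_MOD_STRIP=1 modules_install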


  • Securing phpmyadmin: non-standard port + https

    - by elect
    Trying to secure phpMyAdmin, we have already done the following:

        Cookie auth login
        Firewalled off TCP port 3306
        Running on a non-standard port

    Now we would like to implement HTTPS... but how does that work with phpMyAdmin already running on a non-standard port? This is the Apache config:

        # PHP MY ADMIN
        <VirtualHost *:$CUSTOMPORT>
            Alias /phpmyadmin /usr/share/phpmyadmin

            <Directory /usr/share/phpmyadmin>
                Options FollowSymLinks
                DirectoryIndex index.php
                <IfModule mod_php5.c>
                    AddType application/x-httpd-php .php
                    php_flag magic_quotes_gpc Off
                    php_flag track_vars On
                    php_flag register_globals Off
                    php_value include_path .
                </IfModule>
            </Directory>

            # Disallow web access to directories that don't need it
            <Directory /usr/share/phpmyadmin/libraries>
                Order Deny,Allow
                Deny from All
            </Directory>
            <Directory /usr/share/phpmyadmin/setup/lib>
                Order Deny,Allow
                Deny from All
            </Directory>

            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel warn
            CustomLog ${APACHE_LOG_DIR}/phpmyadmin.log combined
        </VirtualHost>
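
    Editor's note: HTTPS doesn't care which port it runs on - 443 is only the convention - so the existing vhost can simply become an SSL vhost (a sketch; the certificate paths are placeholders for wherever yours live):

        <VirtualHost *:$CUSTOMPORT>
            SSLEngine on
            SSLCertificateFile    /etc/ssl/certs/phpmyadmin.pem
            SSLCertificateKeyFile /etc/ssl/private/phpmyadmin.key
            # ... rest of the vhost as above ...
        </VirtualHost>

    Clients then connect with an explicit https:// URL and port, e.g. https://host:$CUSTOMPORT/phpmyadmin.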

