Daily Archives

Articles indexed Sunday, November 13, 2011

Page 9/12 | < Previous Page | 5 6 7 8 9 10 11 12  | Next Page >

  • Nginx Rewrite rules for clean URLs

    - by Sujay
    I want to write nginx rewrite rules for clean URLs. Every time a user hits http://domain.com/abc/12/16/abc-def-ghi I need to execute domain.com/abc.php?a=12&b=16&c=abc-def-ghi. My regex is right according to Rubular: ^\/abc\/(\d+)\/(\d+)\/(\w+\S+)$ (http://rubular.com/regexes/11063), and the rule is:

        if (!-e $request_filename) {
            rewrite ^\/abc\/(\d+)\/(\d+)\/(\w+\S+)$ abc.php?a=$1&b=$2&c=$3 last;
        }

    But it gives "No input File specified" and I can't find what the problem is.
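
    For illustration, a quick way to sanity-check what the three capture groups produce: a minimal Python sketch using the same pattern against a made-up test URI. (As a hedged aside, the "No input File specified" message usually means PHP/FastCGI was handed a SCRIPT_FILENAME it could not find, rather than indicating a regex problem.)

        import re

        # Same pattern as the nginx rule; the escaped slashes are unnecessary here.
        pattern = re.compile(r"^/abc/(\d+)/(\d+)/(\w+\S+)$")

        m = pattern.match("/abc/12/16/abc-def-ghi")   # hypothetical test URI
        if m:
            a, b, c = m.groups()
            print(f"a={a} b={b} c={c}")               # expect: a=12 b=16 c=abc-def-ghi
        else:
            print("no match")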

    Read the article

  • nginx connection timeout issue on some IPs

    - by sheldon
    I have recently shifted my server to nginx and php-fpm, getting rid of Apache. This has helped improve the speed of my website. Everything seemed to work fine until I came across this issue: nginx keeps throwing connection timed out errors for only certain IPs. One of the IPs is my office IP; we have a backend that is accessed from our office throughout the day. I use supervisord to launch 3 php-fpm processes with workers. This is my typical php-fpm config:

        pm.max_children = 50
        pm.start_servers = 20
        pm.min_spare_servers = 5
        pm.max_spare_servers = 35
        pm.max_requests = 300

    Since I have a server with 4 cores and 2 GB RAM, this is my nginx setup:

        worker_processes 4;
        worker_rlimit_nofile 8192;
        events {
            worker_connections 1024;
            use epoll;
            multi_accept off;
        }
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 55;
        recursive_error_pages on;
        server_name_in_redirect off;
        server_tokens off;
        client_header_timeout 3m;
        client_body_timeout 3m;
        send_timeout 3m;
        connection_pool_size 256;
        client_header_buffer_size 8k;
        large_client_header_buffers 4 32k;
        request_pool_size 4k;
        output_buffers 4 32k;
        postpone_output 1460;
        proxy_buffer_size 32k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
        fastcgi_connect_timeout 120;
        fastcgi_send_timeout 120;
        fastcgi_read_timeout 180;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;
        fastcgi_intercept_errors on;
        fastcgi_ignore_client_abort off;

    Where am I going wrong with the config? I have tried various settings but the issue still persists. These are the errors I keep getting:

        2011/11/13 18:20:33 [error] 21583#0: *311683 upstream timed out (110: Connection timed out) while reading response header from upstream, client: IP, server: tastykhana.in, request: "GET url HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm.socket:", host: "tastykhana.in", referrer: "url"

    Read the article

  • How can I use Apache log files to recreate a usage scenario?

    - by daigorocub
    Recently I installed a website that had too many requests and was too slow. Many improvements have been made to the website code, and we've also bought a new server. I want to test the new server with exactly the same requests that made the old server slow. After that, I will double the requests, run new tests and so on. These requests are logged in the Apache log files, so I can parse those files and write some kind of script to make the same requests. Of course, in this case the requests will be made only by my computer against the server, but hey, better than nothing. Questions:
    - Is there some app that does this already?
    - Would you use wget? ab? A Python script?
    Thanks!
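
    For illustration, a minimal sketch of the parse-and-replay idea in Python, assuming a combined-format access log at a hypothetical path and a hypothetical base URL for the new server; it replays only GET requests, since POST bodies are not in the log:

        import re
        import urllib.request

        LOG_FILE = "/var/log/apache2/access.log"       # hypothetical log location
        BASE_URL = "http://new-server.example.com"     # hypothetical new server

        # Pull the request line ("GET /path HTTP/1.1") out of each log entry.
        line_re = re.compile(r'"(GET|POST) (\S+) HTTP/[\d.]+"')

        with open(LOG_FILE) as log:
            for line in log:
                m = line_re.search(line)
                if not m or m.group(1) != "GET":
                    continue
                url = BASE_URL + m.group(2)
                try:
                    with urllib.request.urlopen(url, timeout=10) as resp:
                        print(resp.status, url)
                except Exception as exc:
                    print("FAILED", url, exc)

    Replaying sequentially like this will not reproduce the original concurrency; ab (mentioned above) or siege can add parallelism, and siege can read a list of URLs from a file.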

    Read the article

  • SSH command from PHP script does nothing, yet works at the command line

    - by waxical
    I'm working on an EC2 box and trying to run an SSH command against another box. The command works at the command line, and even in php -a interactive mode, but it does not work when running as apache. Example command:

        system('ssh -i /home/me/keys/key.pem [email protected] "ls"');

    I've tried adding apache to the wheel group, and to gshadow, on both boxes. I've also just tried chowning the pem file to apache. Nothing. Yet the command responds fine in the two other use cases outlined above. What's going on here? Anyone know?

    Read the article

  • XCA: sign IPsec certificates with own CA

    - by sbrattla
    I'm trying to establish a LAN-to-LAN connection through a VPN tunnel. There's a Zywall at the remote office which will be responsible for establishing a connection to a Draytek at the main office. I'm able to establish the connection if I use shared keys, but I'd like to use certificates instead. I've downloaded the XCA application for Ubuntu, which allows me to first create a CA certificate and then sign certificate signing requests using this CA. However, I'm uncertain whether I am doing things right. More specifically, which basic key usage / extended key usage values should the CA certificate and the certificates themselves have? Right now I just skip selecting any key usage at all, but is that right? All hints and help appreciated!

    Read the article

  • Mysqld increases the load on the CPU, which drops after flush-tables

    - by mirage
    Please advise on this issue. The normal load on the CPU is 20-30% us + sy. After restoring the database files from the slave server (same version), a periodic problem began: mysql starts loading the CPU at 100% (us + sy grow proportionally), the queue grows and everything slows down. But with mysqladmin flush-tables things go back to normal for a few hours. It is a dedicated Linux server running MySQL, 2 x E5506, 24 GB RAM, database size 50 GB.

        [OK] Currently running supported MySQL version 5.0.51a-24+lenny4-log
        [OK] Operating on 64-bit architecture
        -------- Storage Engine Statistics -------------------------------------------
        [-] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
        [-] Data in MyISAM tables: 33G (Tables: 1474)
        [-] Data in InnoDB tables: 1G (Tables: 4)
        [-] Data in MEMORY tables: 120K (Tables: 3)
        [-] Reads / Writes: 91% / 9%
        [-] Total buffers: 12.8M per thread and 7.1G global
        [OK] Maximum possible memory usage: 15.8G (66% of installed RAM)

    The load is 4000-5500 rps. Relevant my.cnf settings:

        key_buffer = 1536M
        max_allowed_packet = 2M
        table_cache = 4096
        sort_buffer_size = 409584
        read_buffer_size = 128K
        read_rnd_buffer_size = 8M
        myisam_sort_buffer_size = 64M
        thread_cache_size = 500
        query_cache_size = 100M
        thread_concurrency = 24
        max_connections = 700
        tmp_table_size = 4096M
        join_buffer_size = 4M
        max_heap_table_size = 4096M
        query_cache_limit = 1M
        low_priority_updates = 1
        concurrent_insert = 2
        wait_timeout = 30
        server-id = 1
        log_bin = /var/log/mysql/mysql-bin.log
        expire_logs_days = 10
        max_binlog_size = 100M
        innodb_buffer_pool_size = 1536M
        innodb_log_buffer_size = 4M
        innodb_flush_log_at_trx_commit = 2

    How can I solve this problem?
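
    For context on the tuner numbers above, a back-of-the-envelope recalculation in Python using the values from this config. This is the usual rough approximation (global buffers plus per-thread buffers times max_connections), not an exact accounting, so it lands close to, but slightly under, the reported 7.1G / 12.8M / 15.8G figures (thread_stack and a few smaller per-thread buffers are ignored):

        # Rough worst-case memory estimate from the my.cnf values quoted above.
        MB = 1024 * 1024

        global_buffers = (
            1536 * MB      # key_buffer
            + 100 * MB     # query_cache_size
            + 1536 * MB    # innodb_buffer_pool_size
            + 4 * MB       # innodb_log_buffer_size
            + 4096 * MB    # min(tmp_table_size, max_heap_table_size)
        )

        per_thread = (
            409584         # sort_buffer_size
            + 128 * 1024   # read_buffer_size
            + 8 * MB       # read_rnd_buffer_size
            + 4 * MB       # join_buffer_size
        )

        max_connections = 700
        total = global_buffers + per_thread * max_connections

        print(f"global buffers : {global_buffers / MB / 1024:.1f} GB")   # ~7.1 GB
        print(f"per thread     : {per_thread / MB:.1f} MB")              # ~12.5 MB
        print(f"worst case     : {total / MB / 1024:.1f} GB")            # ~15.7 GB

    The point of the exercise is that the 4096M tmp_table_size / max_heap_table_size setting alone accounts for over half of the 7.1G of global buffers.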

    Read the article

  • How can I disable logging in Tomcat 7?

    - by WilliamMayor
    I have a Tomcat 7 server running in a VM that has very little disk space (20G). Over the course of a few days Tomcat will fill the space with logging info (usually about 15G before it runs out). I've tried turning down the log level (from INFO to SEVERE) in the logging.properties file, I've also tried sending the log info to /dev/null. It doesn't seem to work as I still get a full log directory after no time at all. Can I put a file size limit on the log files? Is something overriding the properties I'm setting? Where can I find this information? My Google Fu just returns information about logging from within an application using JULI.

    Read the article

  • Why do I have no TTY on a basic Ubuntu 9.10 server install?

    - by pr1001
    I have reinstalled Ubuntu 9.10 Server several times on a bog-standard 1RU server, and each time I finish the install and reboot I see GRUB run and am then presented with a black screen. The machine is running just fine, as I am able to SSH in, but I can't see anything on the attached monitor. I have a simple LCD screen connected via VGA, and a signal is apparently being output to it, as it doesn't go to sleep. Looking at /var/log/syslog I see:

        Mar 24 14:57:44 bridge5 rsyslogd-2039: Could no open output file '/dev/xconsole' [try http://www.rsyslog.com/e/2039 ]

    However, I later see:

        Mar 24 14:57:44 bridge5 kernel: [    0.001368] console [tty0] enabled

    Any thoughts? Thanks!

    Read the article

  • ARCserve creates additional jobs

    - by wullxz
    We're running CA ARCserve Backup r12.5 (Build 5854) - Small Business Server Edition on our Small Business Server 2008. There is a daily job which saves the system drive, Exchange and 3 databases to our backup storage. It seems like this daily job creates additional jobs every time it runs, and I don't know why. Can anybody tell me why this is happening? (I'm sorry the screenshot is in German... "Ergänzungsjob" translates to something like "supplementary job" or "additional job".)

    Read the article

  • Can mod_fcgid maintain a hard-minimum number of available appserver processes?

    - by user9795
    ...and if so, how? I'm using Apache2 + mod_fcgid to serve a Perl Catalyst application on a box that I own, and I'd like mod_fcgid to maintain a minimum number of spun-up processes ready to go. The docs say that FcgidMinProcessesPerClass only enforces "a minimum number of processes that will be retained in a process class after finishing requests". How do I get Apache to start up with a certain number of appserver subprocesses on an idle server, without using artificial load to get there?

    Read the article

  • sysctl.conf not running on boot

    - by Brian
    At what point is sysctl.conf supposed to be read during boot, and why might it not be running? I have the following settings which are not being applied when I reboot:

        net.bridge.bridge-nf-call-arptables = 0
        net.bridge.bridge-nf-call-ip6tables = 0
        net.bridge.bridge-nf-call-iptables = 0
        net.bridge.bridge-nf-filter-pppoe-tagged = 0
        net.bridge.bridge-nf-filter-vlan-tagged = 0
        fs.nfs.nlm_udpport = 32768
        fs.nfs.nlm_tcpport = 32768

    The first section is needed for KVM bridging, and the second is to run the NFS lock manager on a known port. However, after booting, these values have not taken effect. If I run sysctl -p, then they do. This wouldn't be a huge issue, except that I can't figure out how to restart the lock manager without rebooting. I would really like to know why sysctl.conf isn't working at boot, but I'd settle for just being able to restart the lock manager. This is on Ubuntu Server 10.04.2, kernel 2.6.32-31-server. I know some daemons check the permissions on their config files and refuse to work if they're too permissive, but sysctl.conf is 644 root:root, which I'm pretty sure is the default.
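
    One possible explanation, offered as an assumption rather than a diagnosis: the net.bridge.* and fs.nfs.* keys only exist once the bridge and lockd/nfs modules are loaded, so an early boot-time pass over sysctl.conf can skip them silently. Below is a minimal sketch (in Python, run as root) of a late-boot re-apply step, roughly what sysctl -p does, which could be hooked into something like /etc/rc.local; keys whose /proc/sys entries are still missing are reported instead of silently ignored:

        #!/usr/bin/env python3
        import os

        CONF = "/etc/sysctl.conf"

        with open(CONF) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith(("#", ";")):
                    continue
                key, _, value = line.partition("=")
                key, value = key.strip(), value.strip()
                path = "/proc/sys/" + key.replace(".", "/")
                if not os.path.exists(path):
                    print(f"skipping {key}: {path} does not exist (module not loaded yet?)")
                    continue
                with open(path, "w") as proc_entry:
                    proc_entry.write(value + "\n")
                print(f"applied {key} = {value}")

    Running plain sysctl -p from a late init script achieves the same effect; the sketch only makes the missing-key case visible.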

    Read the article

  • SSL certificates with password encrypted key at hosting provider

    - by Jurian Sluiman
    We are a software company and offer hosting to our clients. We have a VPS at a large Dutch datacenter. For some of the applications we need an SSL certificate, which we'd like to protect with a password-encrypted key file. Our VPS reboots now and then because of updates, which means our Apache doesn't start right away because the passwords are needed. This results in downtime and is of course a real big problem. We could give the passwords to our VPS datacenter, or create certificates based on key files without passwords. Neither solution seems like the best one, because both compromise the security of our certificates. What's the best solution for this issue?

    Read the article

  • "Brute force attempt" on sending multiple emails

    - by bretddog
    While testing sending multiple emails, I successfully sent about 100 emails (with a 20 KB PDF attachment) to the same email address (my own), and they were all received. But on the next attempt, my cPanel account was blocked due to a "brute force attempt". Are there any special precautions I need to take when sending bulk emails? I simply looped through the code below without a pause between emails. What type of alert could that trigger on the email server, and how should I avoid it?

        client = New SmtpClient(smtp, Convert.ToInt32(port))
        AddHandler client.SendCompleted, AddressOf OnAsyncSendComplete
        client.Credentials = New System.Net.NetworkCredential(usn, psw)
        client.SendAsync(mail, token)

    Should I wait for the SendCompleted event for each email before sending the next?
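
    Setting the VB.NET specifics aside, the pacing idea (reuse one authenticated connection and put a delay between messages instead of firing many asynchronous sends at once) can be sketched as follows; Python is used purely for illustration, and the server name, credentials, recipients and delay are made up:

        import smtplib
        import time
        from email.message import EmailMessage

        SMTP_HOST, SMTP_PORT = "mail.example.com", 587   # hypothetical server
        USER, PASSWORD = "user@example.com", "secret"    # hypothetical credentials
        DELAY_SECONDS = 2                                # crude rate limit

        recipients = ["me@example.com"] * 10             # hypothetical recipients

        with smtplib.SMTP(SMTP_HOST, SMTP_PORT) as server:
            server.starttls()
            server.login(USER, PASSWORD)
            for rcpt in recipients:
                msg = EmailMessage()
                msg["From"] = USER
                msg["To"] = rcpt
                msg["Subject"] = "test"
                msg.set_content("test body")
                server.send_message(msg)      # one connection, sequential sends
                time.sleep(DELAY_SECONDS)

    In the .NET code above, the rough equivalent would be waiting for the SendCompleted event (or using the synchronous Send) before starting the next message, rather than issuing many SendAsync calls back to back.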

    Read the article

  • I can access \\server via Explorer but a program won't

    - by Michael Savage
    From ServerA I can access \\ServerB\Telephony\Files\abcdefgh.pdf using Windows Explorer. From the same ServerA, when I try to access the same file on ServerB using a program (a program that imports files from a CSV file), I get a "File Not Found" error. On \\ServerB\Telephony the share is enabled, and I added the service account that I used to log in to ServerA. I am clueless. Please suggest. (Oh, it's a Windows 2008 R2 server.) (BTW, I did try the IP address and the FQDN; they work with Explorer, but the CSV importer won't read the path. At one time I did get Access Denied, but I don't get Access Denied anymore after adding the service account to the share. Firewalls are off on both servers.) Update: when I go to My Computer, Network, I see many servers, but ServerB is not in the list.
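
    Purely as a hypothetical diagnostic (not from the original question): running something like this under the same account the importer uses, for instance via a scheduled task configured to run as that service account, shows whether that account can actually see the UNC path. A difference in the effective account is a common reason something works in Explorer but fails in a program:

        import getpass
        import os

        UNC_PATH = r"\\ServerB\Telephony\Files\abcdefgh.pdf"

        print("running as:", getpass.getuser())
        print("exists:    ", os.path.exists(UNC_PATH))
        try:
            with open(UNC_PATH, "rb") as f:
                f.read(1)
            print("readable:   yes")
        except OSError as exc:
            print("readable:   no ->", exc)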

    Read the article

  • Linux Kernel crash mutex_lock_slowpath "blocked for more than 120 seconds". What to do?

    - by Roddick
    I have an out-of-the-box Debian Lenny with the non-custom kernel 2.6.26-2-amd64. It is a brand new server that is used to maybe 5% of its potential, CPU- and disk-wise, meaning it is probably not hanging because of overload. Every few days it freezes with hundreds of these messages in the console log:

        [284847.828428] INFO: task apache2:12473 blocked for more than 120 seconds.
        [284847.868468] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [284847.912759] apache2 D ffff8101bc6b7ab0 0 12473 14358
        [284847.912763] ffff810160d5bc50 0000000000000082 ffff8101c0002e40 0000000000000000
        [284847.912766] ffff8101a7c42950 ffff810327d92810 ffff8101a7c42bd8 0000000400000044
        [284847.912770] ffff8101c0002e40 00000000000612d0 0000000000000000 00000040000612d0
        [284847.912773] Call Trace:
        [284847.912786] [<ffffffff80429b0d>] __mutex_lock_slowpath+0x64/0x9b
        [284847.912790] [<ffffffff80429972>] mutex_lock+0xa/0xb
        [284847.912794] [<ffffffff802a20b9>] do_lookup+0x82/0x1c1
        [284847.912800] [<ffffffff802a4271>] __link_path_walk+0x87a/0xd19
        [284847.912805] [<ffffffff80295844>] kmem_getpages+0x96/0x15f
        [284847.912808] [<ffffffff80295fb7>] ____cache_alloc_node+0x6d/0x106
        [284847.912814] [<ffffffff802a4756>] path_walk+0x46/0x8b
        [284847.912819] [<ffffffff802a4a82>] do_path_lookup+0x158/0x1cf
        [284847.912822] [<ffffffff802a3879>] getname+0x140/0x1a7
        [284847.912827] [<ffffffff802a53f1>] __user_walk_fd+0x37/0x4c
        [284847.912831] [<ffffffff8029e381>] vfs_lstat_fd+0x18/0x47
        [284847.912840] [<ffffffff8029e3c9>] sys_newlstat+0x19/0x31
        [284847.912848] [<ffffffff8020beda>] system_call_after_swapgs+0x8a/0x8f

    Almost all of the traces have __mutex_lock_slowpath at the top. Only some have a different trace:

        [284847.737386] INFO: task apache2:12472 blocked for more than 120 seconds.
        [284847.777551] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [284847.824881] apache2 D ffff8101bc6b7ab0 0 12472 14358
        [284847.824886] ffff8101b9cc1c50 0000000000000086 ffffffffa0131e0a 0000000000000002
        [284847.824889] ffff8102e7454300 ffff810324c6cad0 ffff8102e7454588 0000000000000000
        [284847.824893] 0000000000000001 0000000000000296 0000000000000003 ffff8101b9cc1c58
        [284847.824896] Call Trace:
        [284847.828403] [<ffffffffa0131e0a>] :ext3:__ext3_journal_dirty_metadata+0x1e/0x46
        [284847.828412] [<ffffffff80429b0d>] __mutex_lock_slowpath+0x64/0x9b
        [284847.828418] [<ffffffff80429972>] mutex_lock+0xa/0xb
        [284847.828421] [<ffffffff802a20b9>] do_lookup+0x82/0x1c1
        [284847.828427] [<ffffffff802a4271>] __link_path_walk+0x87a/0xd19
        [284847.828428] [<ffffffff80271296>] find_lock_page+0x1f/0x8a
        [284847.828428] [<ffffffff80273182>] filemap_fault+0x1c2/0x33c
        [284847.828428] [<ffffffff802a4756>] path_walk+0x46/0x8b
        [284847.828428] [<ffffffff802a4a82>] do_path_lookup+0x158/0x1cf
        [284847.828428] [<ffffffff802a3879>] getname+0x140/0x1a7
        [284847.828428] [<ffffffff802a53f1>] __user_walk_fd+0x37/0x4c
        [284847.828428] [<ffffffff8029e381>] vfs_lstat_fd+0x18/0x47
        [284847.828428] [<ffffffff8029e3c9>] sys_newlstat+0x19/0x31
        [284847.828428] [<ffffffff8020beda>] system_call_after_swapgs+0x8a/0x8f

        [1912668.466347] INFO: task apache2:17984 blocked for more than 120 seconds.
        [1912668.507035] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [1912668.555165] apache2 D ffff8101c5637ba0 0 17984 17282
        [1912668.596752] ffff810166a7dd30 0000000000000086 0000000000000000 ffff810166a7dcd8
        [1912668.643341] ffff8101c563c880 ffff81024505f000 0000000000000002 ffff810166a7dd68
        [1912668.699566] 0000000000000086 00000000000cb1a0 0000000000000000 ffff81017f344d60
        [1912668.744773] Call Trace:
        [1912668.761754] [<ffffffff8022a3ed>] pick_next_task_fair+0x6e/0x7a
        [1912668.829311] [<ffffffff802be0e2>] bio_alloc_bioset+0x89/0xd9
        [1912668.861930] [<ffffffff8024ac3a>] getnstimeofday+0x39/0x98
        [1912668.897005] [<ffffffff802710f6>] sync_page+0x0/0x41
        [1912668.927868] [<ffffffff80429487>] io_schedule+0x5c/0x9e
        [1912668.960286] [<ffffffff80271132>] sync_page+0x3c/0x41
        [1912668.991756] [<ffffffff804295fa>] __wait_on_bit_lock+0x36/0x66
        [1912669.031757] [<ffffffff802710e3>] __lock_page+0x5e/0x64
        [1912669.064191] [<ffffffff802461d3>] wake_bit_function+0x0/0x23
        [1912669.100100] [<ffffffff80281bc5>] handle_mm_fault+0x5e4/0x8de
        [1912669.134531] [<ffffffff802461a5>] autoremove_wake_function+0x0/0x2e
        [1912669.174623] [<ffffffff802aa108>] fcntl_setlk+0x1cf/0x291
        [1912669.210623] [<ffffffff802461a5>] autoremove_wake_function+0x0/0x2e
        [1912669.246923] [<ffffffff802a677f>] sys_fcntl+0x280/0x2f7

    After googling for "mutex_lock_slowpath" I can only find kernel mailing list discussions saying this issue was introduced in some commit, without reference to a version; the discussions are as recent as Jan 25, 2011. The kernel I am using is from Debian Lenny, a year old. What should I do? Is this bug even fixed in the kernel? If it's such an obvious bug, why does it happen so rarely? Should I download the latest kernel from kernel.org and upgrade? Should I use Debian backports to install a new "approved" kernel? Am I missing something? What to do?

    Read the article

  • PHP fopen fails - does not have permission to open file in write mode

    - by George
    I have an Apache 2.17 server running on Fedora 13. I want to be able to create a file in a directory, but I cannot do that: whenever I try to open a file with PHP for writing (fopen(,'w')), it tells me that I don't have permission to do that. So I checked the httpd.conf file in /etc/httpd/conf/. It says user apache, group apache. So I changed ownership of my whole /www directory to apache:apache (chown -R apache:apache .*). I also ran chmod -R 777 *. Apart from knowing how terribly dangerous this is, it actually still gives me the same error, even though I now even allow public write!
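
    A speculative aside rather than an answer: on Fedora, SELinux can deny writes by httpd even when ownership and mode are wide open, so the security context is worth checking alongside the classic permissions. A small diagnostic sketch (the target path below is hypothetical):

        #!/usr/bin/env python3
        import grp
        import os
        import pwd
        import stat

        TARGET = "/www/uploads"   # hypothetical; use the directory PHP writes to

        # Collect every component of the path, from / down to the target.
        path, parts = os.path.abspath(TARGET), []
        while path != "/":
            parts.append(path)
            path = os.path.dirname(path)
        parts.append("/")

        for p in reversed(parts):
            st = os.stat(p)
            owner = pwd.getpwuid(st.st_uid).pw_name
            group = grp.getgrgid(st.st_gid).gr_name
            mode = stat.filemode(st.st_mode)
            try:
                context = os.getxattr(p, "security.selinux").decode().rstrip("\x00")
            except OSError:
                context = "n/a"
            print(f"{mode} {owner}:{group} {context} {p}")

    If the directory's SELinux type is not one httpd is allowed to write to (httpd_sys_rw_content_t, for example), that would explain the failure despite chmod 777; temporarily running setenforce 0 is a quick way to confirm whether SELinux is the culprit.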

    Read the article

  • Missing Setup->VLAN on DD-WRT Web GUI?

    - by leeand00
    Please note that I've already posted this question on the dd-wrt forums. I am using "DD-WRT v24-sp2 (08/07/10) std" on a Buffalo WZR-HP-G300NH router. I've found a number of tutorials online that talk about using Setup -> VLAN in the menu system of the web GUI to configure VLANs, but on my own router it appears that VLAN configuration is located elsewhere, mainly in the Setup -> Networking -> VLAN Tagging section of the GUI. I would gladly just use the bash shell to configure the VLANs on the router, but every tutorial I read refers to "changing the GUI to reflect the changes made in the bash prompt". Are there any tutorials or documentation that you are aware of that refer to the Setup -> Networking -> VLAN Tagging portion of my router's GUI?

    Read the article

  • "blue screen" occurs when using VPN

    - by Milla Well
    I am using the VPN client Tunnelblick to get access to the restricted sites of my university. This works quite well until, all of a sudden, a "blue screen" occurs telling me to restart the computer. I am using Mac OS X Lion on a MacBook Air (4,1). The error report tells me that the system crashed somewhere while processing the ipconfig thread. Has anyone seen this or a similar problem with Mac OS X Lion and VPNs, and found some kind of solution?

    Read the article

  • Xen DomU does not have network connectivity

    - by Prakashkumar Thiagarajan
    I am trying to install Xen on my Fedora box. The Dom0 image has network connectivity, but when I try to create a DomU, it does not. I want to be able to run in bridged mode, and I have configured the /etc/xend/xend-config.sxp file accordingly. My config file looks like:

        kernel = "/boot/vmlinuz-2.6.18-xenU"
        memory = 64
        name = "clientA"
        vif = ['bridge=xenbr0,mac=12.34.56.78.9A.BC']
        root = "/dev/sda1 ro"
        ramdisk = "/boot/initrd-linux.img"
        extra = "ro selinux=0.3 initcall_debug"
        features = 'auto_translated_physmap'

    Am I missing something?

    Read the article

  • Image an entire computer

    - by DalexL
    I want to start doing computer repair. I know that some issues may arise (what if I do something wrong and ruin the entire computer?). I'm not saying I'm not confident in my work, but accidents do happen sometimes. Now, the best solution to this problem would be to simply back up the entire computer and restore it if anything goes wrong. If I have an external HD, are there any available methods to create an entire image of the computer so that no matter what I do, I can just reload its last state?

    Read the article

  • What is the best VM for developing WPF apps from within OS X?

    - by MarqueIV
    All of my machines are Macs (Mac Pro, MacBook Pro, MacBook Air and Mac Mini (and Apple TV 2.0 too! :) ) but for my day-job, I develop .NET/WPF applications. Normally I just boot into Boot Camp and develop that way, which of course works great, but there are times when I need to simultaneously get to things on my Mac-side of the equation, so I've bought both VMware 3.1 and Parallels 6. Both work, however, even on my Mac Pro where I paid to upgrade to the better video cards (the NVidia 8600s I think vs. the stock ATI cards) the WPF performance bites!! Now this confuses me since both boast that they support not only hardware-accelerated OpenGL 2.1, but also hardware-accelerated DirectX 9 (VMware even allegedly supports DirectX 10!) via their respective virtual drivers and both can run 3D games just fine, even in a window. But even the simple act of resizing a WPF window that has a tiled background results in some HIDEOUS repainting and resizing behaviors. It's damn near closer to what you'd expect over RDP let alone a software-only renderer (forget accelerated hardware completely!) So... can anyone please tell me WTF WPF is doing differently? More importantly, how can I speed up the WPF performance? Should I switch to VirtualBox that also has support for DirectX? Or am I just gonna have to 'byte' the bullet (sorry... had to. So I like puns! Thank Jon Stewart!) and continue using Boot Camp?

    Read the article

  • Persistent static routes fail on MacOS 10.6.5 startup!

    - by verbalicious
    I'm unable to get static routes to persist across a reboot on Mac OS X 10.6.5. I've tried all of the methods prescribed in Google search results and in previous posts on this site: I've tried manually creating a launchd daemon, and I've used RouteSplit's launchd daemon, to no avail. It's clear that the interface is not ready when these methods attempt to apply the route. The workstation in question is getting its IP from DHCP and probably hasn't gotten its DHCP lease when the command runs. We're able to apply the route by hand when logged in, but not through startup methods. Is there another way to apply this route by sneaking the command into something later, but before the login window appears to the user? Here is some relevant log info from system.log. You can see the "route: writing to routing socket: Network is unreachable" errors where my launchd script fires off. I've tried adding extra "sleep" and "ipconfig waitall" statements later in the script, but this doesn't fly.

        Dec 15 19:30:41 localhost com.apple.launchd[1]: *** launchd[1] has started up. ***
        Dec 15 19:30:45 localhost mDNSResponder[18]: mDNSResponder mDNSResponder-258.13 (Oct 8 2010 17:10:30) starting
        Dec 15 19:30:47 localhost configd[15]: bootp_session_transmit: bpf_write(en1) failed: Network is down (50)
        Dec 15 19:30:47 localhost configd[15]: DHCP en1: INIT transmit failed
        Dec 15 19:30:47 localhost configd[15]: network configuration changed.
        Dec 15 19:30:47 Administrators-MacBook-Pro configd[15]: setting hostname to "Administrators-MacBook-Pro.local"
        Dec 15 19:30:47 Administrators-MacBook-Pro blued[16]: Apple Bluetooth daemon started
        Dec 15 19:30:52 Administrators-MacBook-Pro syslog[67]: routes.sh: Starting RouteSplit
        Dec 15 19:30:53 Administrators-MacBook-Pro com.apple.usbmuxd[41]: usbmuxd-207 built for iTunesTenOne on Oct 19 2010 at 13:50:35, running 64 bit
        Dec 15 19:30:54 Administrators-MacBook-Pro /System/Library/CoreServices/loginwindow.app/Contents/MacOS/loginwindow[50]: Login Window Application Started
        Dec 15 19:30:55 Administrators-MacBook-Pro bootlog[61]: BOOT_TIME: 1292459441 0
        Dec 15 19:30:55 Administrators-MacBook-Pro syslog[86]: routes.sh: static route 192.168.0.0/23 192.168.2.2
        Dec 15 19:30:55 Administrators-MacBook-Pro net.routes.static[65]: route: writing to routing socket: Network is unreachable
        Dec 15 19:30:55 Administrators-MacBook-Pro net.routes.static[65]: add net 192.168.0.0: gateway 192.168.2.2: Network is unreachable
        Dec 15 19:30:57 Administrators-MacBook-Pro org.apache.httpd[38]: httpd: Could not reliably determine the server's fully qualified domain name, using Administrators-MacBook-Pro.local for ServerName
        Dec 15 19:30:58 Administrators-MacBook-Pro loginwindow[50]: Login Window Started Security Agent
        Dec 15 19:30:58 Administrators-MacBook-Pro WindowServer[89]: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged.
        Dec 15 19:30:58 Administrators-MacBook-Pro com.apple.WindowServer[89]: Wed Dec 15 19:30:58 Administrators-MacBook-Pro.local WindowServer[89] <Error>: kCGErrorFailure: Set a breakpoint @ CGErrorBreakpoint() to catch errors as they are logged.
        Dec 15 19:31:18 Administrators-MacBook-Pro configd[15]: network configuration changed.
        Dec 15 19:31:19 administrators-macbook-pro configd[15]: setting hostname to "administrators-macbook-pro.local"
        Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[121]: /usr/libexec/ntpd-wrapper: scutil key State:/Network/Global/DNS not present after 30 seconds
        Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: sntp options: a=2 v=1 e=0.100 E=5.000 P=2147483647.000
        Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: d=15 c=5 x=0 op=1 l=/var/run/sntp.pid f= time.apple.com
        Dec 15 19:31:25 administrators-macbook-pro _mdnsresponder[124]: sntp: getaddrinfo(hostname, ntp) failed with nodename nor servname provided, or not known
        Dec 15 19:31:27 administrators-macbook-pro configd[15]: network configuration changed.
        Dec 15 19:31:27 Administrators-MacBook-Pro configd[15]: setting hostname to "Administrators-MacBook-Pro.local"
        Dec 15 19:31:27 Administrators-MacBook-Pro ntpd[37]: Cannot find existing interface for address 17.151.16.20
        Dec 15 19:31:27 Administrators-MacBook-Pro ntpd_initres[125]: ntpd indicates no data available!
        Dec 15 19:31:31 Administrators-MacBook-Pro sshd[128]: USER_PROCESS: 133 ttys000
        Dec 15 19:31:37 Administrators-MacBook-Pro sudo[138]: administrator : TTY=ttys000 ; PWD=/Users/administrator ; USER=root ; COMMAND=/usr/bin/less /var/log/system.log

    You can see the following line in /var/log/kernel.log showing the en0 interface coming up:

        Dec 15 19:30:51 Administrators-MacBook-Pro kernel[0]: Ethernet [AppleBCM5701Ethernet]: Link up on en0, 1-Gigabit, Full-duplex, No flow-control, Debug [796d,0f01,0de1,0300,c1e1,3800]
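
    Not a fix from the original post, just a sketch of the "wait until the network is actually up, then add the route" idea, using the same 192.168.0.0/23 via 192.168.2.2 route that appears in the logs. Something along these lines could be called from a launchd job or login hook in place of the bare route command (Python is used only to sketch the logic; /sbin/route and its arguments are standard macOS/BSD syntax):

        #!/usr/bin/env python3
        import subprocess
        import sys
        import time

        NET, MASK, GATEWAY = "192.168.0.0", "255.255.254.0", "192.168.2.2"

        def have_default_route() -> bool:
            # A default route only appears once the DHCP lease is in place.
            result = subprocess.run(["/sbin/route", "-n", "get", "default"],
                                    capture_output=True, text=True)
            return result.returncode == 0 and "gateway:" in result.stdout

        for _ in range(60):                      # give up after ~60 seconds
            if have_default_route():
                subprocess.run(["/sbin/route", "add", "-net", NET,
                                "-netmask", MASK, GATEWAY], check=False)
                sys.exit(0)
            time.sleep(1)

        sys.exit("network never came up; static route not added")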

    Read the article

  • FileZilla install error: Depends: libatk1.0-0 (>= 1.29.3) but 1.28.0-0ubuntu1 is to be installed

    - by solomongaby
    I am trying to install FileZilla from this repo: https://launchpad.net/~yofel/+archive/ppa. After sudo apt-get update I tried to install it, but I get this error:

        Some packages could not be installed. This may mean that you have requested an impossible situation or if you are using the unstable distribution that some required packages have not yet been created or been moved out of Incoming. The following information may help to resolve the situation:

        The following packages have unmet dependencies:
          filezilla: Depends: libatk1.0-0 (>= 1.29.3) but 1.28.0-0ubuntu1 is to be installed

    Do you have any idea what is happening?

    Read the article

  • Can I swap a HDD from a RAID 1 for a bigger capacity drive?

    - by Gnuffo1
    I have a RAID 1 setup with two 500GB drives. So they mirror each other. I know that if I take one of them out and put in a new 500GB drive it will set it up so that the new drive will mirror the old one. My question is if I take out one of the 500GB drives and add say a 2TB drive, will it set up the 2TB drive to mirror the 500GB? I assume if it does, it will only allow me access to 500GB. However, if I then take out the one remaining 500GB after the data has been mirrored and stick in a 2TB, once that is rebuilt, will I then have a 2TB RAID setup with the same data as what was originally on the 2x 500GB setup?

    Read the article

  • Weird execution of ruby/git executables in Windows

    - by Frexuz
    Something strange has happened: I can't run some command-line executables in Windows anymore. Steps:

    1. Open cmd
    2. Run an executable, such as ruby -v or git -h

    When I do that, a new command prompt opens, running that command (I think; it's too fast to see), and instantly closes again. I've managed to print-screen the new command prompt, and it shows that it's running inside this path:

        C:\Documents and Settings\Administrator\Local Settings\Temp\3582-490

    Inside this folder is the executable I'm trying to run. If I run ruby, then ruby.exe is in there. If I run git, then git.exe is in there. And it's always emptying the folder in between, so there is always just one .exe file.

    Read the article
