Search Results

Search found 47251 results on 1891 pages for 'web storage'.


  • Apache configuration to access directory

    - by Felipe Hummel
    I'm on Ubuntu 9.10. My web application lives in /home/me/app and I want Apache to serve it under a sub-path of my domain. People already reach the machine at domain.com; I would like the application (located at /home/me/app) to be reachable at domain.com/myapp. How can I set up the Apache configuration for this? I do not want to move the application to /var/www/myapp. Thanks
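
    A minimal sketch of one way to do this with the Apache 2.2 that ships with Ubuntu 9.10, assuming the paths above; it can go in the site's VirtualHost or in a file under /etc/apache2/conf.d/:

        Alias /myapp /home/me/app
        <Directory /home/me/app>
            Order allow,deny
            Allow from all
        </Directory>

    The Directory block is needed because /home/me/app lies outside the default DocumentRoot; reload Apache afterwards (sudo /etc/init.d/apache2 reload). If the app is dynamic rather than static files, the relevant handler or proxy directives have to apply to that path as well.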

    Read the article

  • RHEL 6.x on Rackspace Cloud and Dedicated hardware experiencing Redis Timeouts

    - by zhallett
    I just recently set up a mixture of RHEL 6.1 Rackspace cloud hosts and RHEL 6.2 dedicated hosts using RackConnect. I am experiencing intermittent Redis timeouts from within our Rails 3.2.8 app; Redis 2.4.16 runs on the RHEL 6.2 dedicated hosts. There is no network latency or packet loss, and there are no errors on any interfaces on our cloud or dedicated servers or on the managed firewall from Rackspace. When Redis times out, nothing is logged within Redis even though it is set up to do debug logging. The only error we receive is from Airbrake saying there was a Redis timeout.

    Network topology:

        RHEL 6.1 cloud hosts <--> Alert Logic IDS <--> Cisco ASA 5510 <--> RHEL 6.2 dedicated hosts
        (web nodes)                                    (two-way NAT)       (db hosts running redis)

    Ping from db host to web host:

        64 bytes from 10.181.230.180: icmp_seq=998 ttl=64 time=0.520 ms
        64 bytes from 10.181.230.180: icmp_seq=999 ttl=64 time=0.579 ms
        64 bytes from 10.181.230.180: icmp_seq=1000 ttl=64 time=0.482 ms
        --- web1.xxxxxx.com ping statistics ---
        1000 packets transmitted, 1000 received, 0% packet loss, time 999007ms
        rtt min/avg/max/mdev = 0.359/0.535/5.684/0.200 ms

    Ping from web host to db host:

        64 bytes from 192.168.100.26: icmp_seq=998 ttl=64 time=0.544 ms
        64 bytes from 192.168.100.26: icmp_seq=999 ttl=64 time=0.452 ms
        64 bytes from 192.168.100.26: icmp_seq=1000 ttl=64 time=0.529 ms
        --- data1.xxxxxx.com ping statistics ---
        1000 packets transmitted, 1000 received, 0% packet loss, time 999017ms
        rtt min/avg/max/mdev = 0.358/0.499/6.120/0.201 ms

    Redis config:

        daemonize yes
        pidfile /var/run/redis/6379/redis_6379.pid
        port 6379
        timeout 0
        loglevel debug
        logfile /var/lib/redis/log
        syslog-enabled yes
        syslog-ident redis-6379
        syslog-facility local0
        databases 16
        save 900 1
        save 300 10
        save 60 10000
        rdbcompression yes
        dbfilename dump-6379.rdb
        dir /var/lib/redis
        maxclients 10000
        maxmemory-policy volatile-lru
        maxmemory-samples 3
        appendfilename appendonly-6379.aof
        appendfsync everysec
        no-appendfsync-on-rewrite no
        auto-aof-rewrite-percentage 100
        auto-aof-rewrite-min-size 64mb
        slowlog-log-slower-than 10000
        slowlog-max-len 1024
        vm-enabled no
        vm-swap-file /tmp/redis.swap
        vm-max-memory 0
        vm-page-size 32
        vm-pages 134217728
        vm-max-threads 4
        hash-max-zipmap-entries 512
        hash-max-zipmap-value 64
        list-max-ziplist-entries 512
        list-max-ziplist-value 64
        set-max-intset-entries 512
        zset-max-ziplist-entries 128
        zset-max-ziplist-value 64
        activerehashing yes

    Output of redis-cli info:

        redis_version:2.4.16
        redis_git_sha1:00000000
        redis_git_dirty:0
        arch_bits:64
        multiplexing_api:epoll
        gcc_version:4.4.6
        process_id:4174
        uptime_in_seconds:79346
        uptime_in_days:0
        lru_clock:1064644
        used_cpu_sys:13.08
        used_cpu_user:19.81
        used_cpu_sys_children:1.56
        used_cpu_user_children:7.69
        connected_clients:167
        connected_slaves:0
        client_longest_output_list:0
        client_biggest_input_buf:0
        blocked_clients:6
        used_memory:15060312
        used_memory_human:14.36M
        used_memory_rss:22061056
        used_memory_peak:15265928
        used_memory_peak_human:14.56M
        mem_fragmentation_ratio:1.46
        mem_allocator:jemalloc-3.0.0
        loading:0
        aof_enabled:0
        changes_since_last_save:166
        bgsave_in_progress:0
        last_save_time:1352823542
        bgrewriteaof_in_progress:0
        total_connections_received:286
        total_commands_processed:507254
        expired_keys:0
        evicted_keys:0
        keyspace_hits:1509
        keyspace_misses:65167
        pubsub_channels:0
        pubsub_patterns:0
        latest_fork_usec:690
        vm_enabled:0
        role:master
        db0:keys=6,expires=0

    Edit 1: added redis-cli info output.
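
    One diagnostic pass worth trying before digging further into the network (a sketch, not a definitive fix; the --latency mode needs a reasonably recent redis-cli build): measure round-trip latency from a web node and check whether the timeouts line up with slow commands or background saves on the Redis host:

        # from a web node, against the db host address seen in the pings above
        redis-cli -h 192.168.100.26 -p 6379 --latency

        # on the db host: the config above already records commands slower than 10 ms
        redis-cli -p 6379 slowlog get 25

        # see whether timeouts coincide with RDB saves (save 900 1 / 300 10 / 60 10000 are enabled)
        redis-cli -p 6379 info | egrep 'bgsave_in_progress|changes_since_last_save|latest_fork_usec'

    The 6 blocked_clients in the info output also make it worth confirming that the Rails side isn't relying on blocking commands (BLPOP and friends) with a client timeout shorter than the intended block.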

    Read the article

  • Program to download .torrent file from a Magnet URI?

    - by robmathers
    I have a torrent app running on my home server that has an outdated but pretty useful web interface. The one problem is that it doesn't take magnet links, only .torrent files. I'd like to continue using it and not have to bother with finding a different program. Given that magnet links are just a pointer for downloading the torrent file from the swarm, I'm hoping there's a program out there that will take a magnet link and spit out a .torrent only. I know I could put it into µTorrent and grab the file from the app's directory, but that's a bit roundabout, and I'd like something that will do it semi-unattended. Preferably for OS X, but Linux (or a web app) would work too.
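
    One candidate (an assumption on my part that it fits the workflow, not something from the question): aria2, which is packaged for both OS X and Linux, can join the swarm just long enough to fetch the metadata and write it out as a .torrent without downloading any payload:

        aria2c --bt-metadata-only=true --bt-save-metadata=true \
               --dir=/path/to/watch-folder \
               "magnet:?xt=urn:btih:..."

    The saved file is named after the info hash; pointing --dir at whatever folder the web interface watches makes the hand-off unattended.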

    Read the article

  • Can't sync filesystem without reboot

    - by Fabio
    I'm having an issue with a Linux server. Once a week the running MySQL instance hangs and there is no way to fully stop it: if I kill it, it remains in zombie status and init does not reap its pid. The server is used for staging deployments and some internal tools, so it's not under heavy load. The only process constantly in use is mysql, which I think is why it's the only process that suffers from this. I've searched the system logs for errors and the only thing I found is this error (repeated a couple of times) in the dmesg output:

        [706560.640085] INFO: task mysqld:31965 blocked for more than 120 seconds.
        [706560.640198] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        [706560.640312] mysqld D ffff88032fd93f40 0 31965 1 0x00000000
        [706560.640317] ffff880242a27d18 0000000000000086 ffff88031a50dd00 ffff880242a27fd8
        [706560.640321] ffff880242a27fd8 ffff880242a27fd8 ffff88031e549740 ffff88031a50dd00
        [706560.640325] ffff88031a50dd00 ffff88032fd947f8 0000000000000002 ffffffff8112f250
        [706560.640328] Call Trace:
        [706560.640338] [<ffffffff8112f250>] ? __lock_page+0x70/0x70
        [706560.640344] [<ffffffff816cb1b9>] schedule+0x29/0x70
        [706560.640347] [<ffffffff816cb28f>] io_schedule+0x8f/0xd0
        [706560.640350] [<ffffffff8112f25e>] sleep_on_page+0xe/0x20
        [706560.640353] [<ffffffff816c9900>] __wait_on_bit+0x60/0x90
        [706560.640356] [<ffffffff8112f390>] wait_on_page_bit+0x80/0x90
        [706560.640360] [<ffffffff8107dce0>] ? autoremove_wake_function+0x40/0x40
        [706560.640363] [<ffffffff8112f891>] filemap_fdatawait_range+0x101/0x190
        [706560.640366] [<ffffffff81130975>] filemap_write_and_wait_range+0x65/0x70
        [706560.640371] [<ffffffff8122e441>] ext4_sync_file+0x71/0x320
        [706560.640376] [<ffffffff811c3e6d>] do_fsync+0x5d/0x90
        [706560.640379] [<ffffffff811c40d0>] sys_fsync+0x10/0x20
        [706560.640383] [<ffffffff816d495d>] system_call_fastpath+0x1a/0x1f

    When this happens, the only way to get everything working again is a full reboot, but to do that I'm forced to use this command after manually stopping all running processes, otherwise the normal reboot process hangs forever:

        echo b > /proc/sysrq-trigger

    I've traced the reboot scripts and found that the reboot process also hangs on a sync call, this one in /etc/init.d/sendsigs (I'm on Ubuntu):

        # Flush the kernel I/O buffer before we start to kill
        # processes, to make sure the IO of already stopped services to
        # not slow down the remaining processes to a point where they
        # are accidentily killed with SIGKILL because they did not
        # manage to shut down in time.
        sync

    I'm almost sure the cause is a hardware issue (the RAID controller???), also because I have two other machines with the same hardware and software configuration and they don't suffer from this, but I can't find any hint in syslog or dmesg. I've also installed the smartmontools and mcelog packages, but neither reported any issue. What can I do to track down the cause?

    Today it happened again. Here is the state of the system after triggering a reboot:

        init---console-kit-dae---64*[{console-kit-dae}]
            +-dbus-daemon
            +-mcelog
            +-mysqld---{mysqld}
            +-newrelic-daemon---newrelic-daemon---11*[{newrelic-daemon}]
            +-ntpd
            +-polkitd---{polkitd}
            +-python3
            +-rpc.idmapd
            +-rpc.statd
            +-rpcbind
            +-sh---rc---S20sendsigs---sync
            +-smartd
            +-snmpd
            +-sshd---sshd---zsh---sudo---zsh---pstree
            +-sshd---sshd---zsh---sudo---zsh

    And here is the status of the sync process, i.e. uninterruptible sleep:

        # ps aux | grep sync
        root 3637 0.1 0.0 4352 372 ? D 05:53 0:00 sync

    Hardware specs as reported by lshw (I think the RAID controller is a fake RAID; I usually don't deal with hardware and, for the record, I don't have physical access to the machine):

        description: Computer
        product: X7DBP ()
        vendor: Supermicro
        version: 0123456789
        serial: 0123456789
        width: 64 bits
        capabilities: smbios-2.4 dmi-2.4 vsyscall32
        configuration: administrator_password=disabled boot=normal frontpanel_password=unknown keyboard_password=unknown power-on_password=disabled uuid=53D19F64-D663-A017-8922-0030487C1FEE
        *-core
             description: Motherboard
             product: X7DBP
             vendor: Supermicro
             physical id: 0
             version: PCB Version
             serial: 0123456789
           *-firmware
                description: BIOS
                vendor: Phoenix Technologies LTD
                physical id: 0
                version: 6.00
                date: 05/29/2007
                size: 106KiB
                capacity: 960KiB
                capabilities: pci pnp upgrade shadowing escd cdboot bootselect edd int13floppy2880 acpi usb ls120boot zipboot biosbootspecification
           *-storage
                description: RAID bus controller
                product: 631xESB/632xESB SATA RAID Controller
                vendor: Intel Corporation
                physical id: 1f.2
                bus info: pci@0000:00:1f.2
                version: 09
                width: 32 bits
                clock: 66MHz
                capabilities: storage pm bus_master cap_list
                configuration: driver=ahci latency=0
                resources: irq:19 ioport:18a0(size=8) ioport:1874(size=4) ioport:1878(size=8) ioport:1870(size=4) ioport:1880(size=32) memory:d8500400-d85007ff
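
    Two low-risk things that may help narrow this down the next time it hangs (a sketch of diagnostics, not a fix): have the kernel log a stack trace for every task stuck in uninterruptible sleep, and read the error counters of the member disks, which on this Intel onboard fake RAID (driver=ahci above) are still visible as plain /dev/sd* devices:

        # writes a backtrace of all D-state tasks to the kernel log (dmesg/syslog)
        echo w > /proc/sysrq-trigger

        # per-disk error counters; repeat for each member disk
        smartctl -a /dev/sda | egrep -i 'reallocated|pending|crc|error'

    If the fsync backtraces always end up waiting on the same block device, that points at the disk or its link rather than at ext4 or MySQL.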

    Read the article

  • How can I create an external SSL wrapper/tunnel page for an insecure webpage behind a firewall?

    - by Ross Rogers
    I have a security cam with a built-in web page inside my home network. The camera uses basic HTTP authentication instead of SSL. I want to be able to access the camera's page from outside my network, but I don't want to expose an unencrypted video stream to the outside world. Right now I'm doing some cumbersome SSH tunneling where I bounce off an SSH server:

        ssh -N -L 9090:CAMERA_IP:80 [email protected]

    and then connect to the page at http://localhost:9090. But this is a pain. Now, gentle reader, I beseech you to tell me how I can use Linux (Ubuntu) to get a fully encrypted SSL connection to my internal web page without the hassle of creating an SSH tunnel each time. I believe I can use stunnel, but I'm not sure of the command.
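
    A minimal stunnel sketch, assuming an Ubuntu box on the home network that can reach the camera (addresses, port and certificate path are placeholders): it accepts TLS on 8443 and forwards plain HTTP to the camera, so only 8443 needs to be opened or port-forwarded to the outside.

        ; /etc/stunnel/camera.conf
        cert = /etc/stunnel/stunnel.pem

        [camera]
        accept  = 0.0.0.0:8443
        connect = CAMERA_IP:80

    A self-signed certificate is enough for personal use: generate a key and certificate with openssl req -new -x509 -nodes and concatenate them into the .pem file stunnel reads, then start it with stunnel /etc/stunnel/camera.conf and browse to https://your-public-address:8443. The camera's basic-auth credentials then travel inside the TLS tunnel.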

    Read the article

  • rsync creating thousands of .DS_Store files from mounted volume

    - by daniel Crabbe
    I've been using rsync on OS X to sync all our website admins. It was working fine until the OS X 10.6.3 update! Now it creates thousands of empty (0-kb) folders. It only does this when syncing to a mounted network drive (which we need to do); when I sync to my local drive it works as usual. I've tried excludes, which don't seem to be working, and also tried a different version of rsync, so it looks like an OS X issue.

        echo ""
        echo "~*~*~*~*~*~*~*~*~*~*~*~*~*"
        echo " SYNCING up KINEMASTIK"
        echo "~*~*~*~*~*~*~*~*~*~*~*~*~*"
        /usr/local/bin/rsync -aNHAXv --progress --exclude-from 'exclude.txt' \
            /Volumes/Groups/Projects/483_Modern_Activity_Website/web/youradmin/ \
            /Users/dan/Dropbox/documents/WORK/kinemastik/WEBSITE/youradmin/

        echo ""
        echo "~*~*~*~*~*~*~*~*~*~*~*~*~*"
        echo " SYNCING up CHRIS BROOKS YOURADMIN"
        echo "~*~*~*~*~*~*~*~*~*~*~*~*~*"
        /usr/local/bin/rsync -aNHAXv --progress --exclude-from 'exclude.txt' \
            /Volumes/Groups/Projects/483_Modern_Activity_Website/web/youradmin/ \
            /Volumes/Groups/Projects/516_ChrisBrooks/website/youradmin/

    Has anyone experienced the same problem?
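
    If the stray entries turn out to be Finder metadata (.DS_Store and ._* AppleDouble files), one workaround to try (a sketch; -A, -X and the Mac-patched -N ask rsync to replicate ACLs, extended attributes and other metadata that many network volumes can't store natively) is to drop those flags for the network-drive copy and exclude the metadata outright:

        /usr/local/bin/rsync -aHv --progress \
            --exclude '.DS_Store' --exclude '._*' \
            --exclude-from 'exclude.txt' \
            /Volumes/Groups/Projects/483_Modern_Activity_Website/web/youradmin/ \
            /Volumes/Groups/Projects/516_ChrisBrooks/website/youradmin/

    The same two patterns can simply be added to exclude.txt instead if the extra flags are actually needed.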

    Read the article

  • chrooting user causes "connection closed" message when using sftp

    - by George Reith
    First off, I am a Linux newbie, so please don't assume much knowledge. I am using CentOS 5.8 (final) with OpenSSH version 5.8p1. I have made a user playwithbits and I am attempting to chroot them to the directory /home/nginx/domains/playwithbits/public. I am using the following Match statement in my sshd_config file:

        Match group web-root-locked
            ChrootDirectory /home/nginx/domains/%u/public
            X11Forwarding no
            AllowTcpForwarding no
            ForceCommand /usr/libexec/openssh/sftp-server

    The output of id playwithbits is:

        uid=504(playwithbits) gid=504(playwithbits) groups=504(playwithbits),507(web-root-locked)

    I have changed the user's home directory to /home/nginx/domains/playwithbits/public. Now when I attempt to sftp in with this user, I instantly get the message "connection closed". Does anyone know what I am doing wrong?

    Edit: Following advice from @Dennis Williamson I have connected in debug mode (I think... correct me if I'm wrong). I have made a bit of progress by using chmod to set permissions recursively on all files in the directory to 700. Now I get the following messages when I attempt to log on (still connection closed):

        Connection from [My ip address] port 38737
        debug1: Client protocol version 2.0; client software version OpenSSH_5.6
        debug1: match: OpenSSH_5.6 pat OpenSSH*
        debug1: Enabling compatibility mode for protocol 2.0
        debug1: Local version string SSH-2.0-OpenSSH_5.8
        debug1: permanently_set_uid: 74/74
        debug1: list_hostkey_types: ssh-rsa,ssh-dss
        debug1: SSH2_MSG_KEXINIT sent
        debug1: SSH2_MSG_KEXINIT received
        debug1: kex: client->server aes128-ctr hmac-md5 none
        debug1: kex: server->client aes128-ctr hmac-md5 none
        debug1: SSH2_MSG_KEX_DH_GEX_REQUEST received
        debug1: SSH2_MSG_KEX_DH_GEX_GROUP sent
        debug1: expecting SSH2_MSG_KEX_DH_GEX_INIT
        debug1: SSH2_MSG_KEX_DH_GEX_REPLY sent
        debug1: SSH2_MSG_NEWKEYS sent
        debug1: expecting SSH2_MSG_NEWKEYS
        debug1: SSH2_MSG_NEWKEYS received
        debug1: KEX done
        debug1: userauth-request for user playwithbits service ssh-connection method none
        debug1: attempt 0 failures 0
        debug1: user playwithbits matched group list web-root-locked at line 91
        debug1: PAM: initializing for "playwithbits"
        debug1: PAM: setting PAM_RHOST to [My host info]
        debug1: PAM: setting PAM_TTY to "ssh"
        debug1: userauth-request for user playwithbits service ssh-connection method password
        debug1: attempt 1 failures 0
        debug1: PAM: password authentication accepted for playwithbits
        debug1: do_pam_account: called
        Accepted password for playwithbits from [My ip address] port 38737 ssh2
        debug1: monitor_child_preauth: playwithbits has been authenticated by privileged process
        debug1: SELinux support disabled
        debug1: PAM: establishing credentials
        User child is on pid 3942
        debug1: PAM: establishing credentials
        Changed root directory to "/home/nginx/domains/playwithbits/public"
        debug1: permanently_set_uid: 504/504
        debug1: Entering interactive session for SSH2.
        debug1: server_init_dispatch_20
        debug1: server_input_channel_open: ctype session rchan 0 win 2097152 max 32768
        debug1: input_session_request
        debug1: channel 0: new [server-session]
        debug1: session_new: session 0
        debug1: session_open: channel 0
        debug1: session_open: session 0: link with channel 0
        debug1: server_input_channel_open: confirm session
        debug1: server_input_global_request: rtype [email protected] want_reply 0
        debug1: server_input_channel_req: channel 0 request env reply 0
        debug1: session_by_channel: session 0 channel 0
        debug1: session_input_channel_req: session 0 req env
        debug1: server_input_channel_req: channel 0 request subsystem reply 1
        debug1: session_by_channel: session 0 channel 0
        debug1: session_input_channel_req: session 0 req subsystem
        subsystem request for sftp by user playwithbits
        debug1: subsystem: cannot stat /usr/libexec/openssh/sftp-server: Permission denied
        debug1: subsystem: exec() /usr/libexec/openssh/sftp-server
        debug1: Forced command (config) '/usr/libexec/openssh/sftp-server'
        debug1: session_new: session 0
        debug1: Received SIGCHLD.
        debug1: session_by_pid: pid 3943
        debug1: session_exit_message: session 0 channel 0 pid 3943
        debug1: session_exit_message: release channel 0
        debug1: session_by_channel: session 0 channel 0
        debug1: session_close_by_channel: channel 0 child 0
        debug1: session_close: session 0 pid 0
        debug1: channel 0: free: server-session, nchannels 1
        Received disconnect from [My ip address]: 11: disconnected by user
        debug1: do_cleanup
        debug1: do_cleanup
        debug1: PAM: cleanup
        debug1: PAM: closing session
        debug1: PAM: deleting credentials
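
    The tell-tale line in that log is "subsystem: cannot stat /usr/libexec/openssh/sftp-server: Permission denied": once the session is chrooted into .../public, that binary path no longer exists inside the jail, so the forced command can't start and sshd closes the connection. A sketch of the usual fix using OpenSSH's in-process SFTP server (note the chroot target and every directory above it must be owned by root and not group- or world-writable):

        Subsystem sftp internal-sftp

        Match group web-root-locked
            ChrootDirectory /home/nginx/domains/%u/public
            ForceCommand internal-sftp
            X11Forwarding no
            AllowTcpForwarding no

    After restarting sshd, /var/log/secure will complain explicitly ("bad ownership or modes for chroot directory") if the chroot path's ownership or permissions still don't satisfy the check.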

    Read the article

  • How can I secure Postgres for remote access when not in a private network?

    - by orokusaki
    I have a database server on a VMware VM (Ubuntu 12.04.1 LTS server), and it just occurred to me that the server is reachable from the web, since the same physical server contains a VM that hosts public websites. My iptables rules on the database server allow only SSH traffic, loopback traffic, and TCP on port 5432. I will only allow host access to the Postgres server from the IP of the other VM on the same physical machine. Does this seem sufficient for security, assuming there aren't gaping holes in my general OS configuration, or is Postgres one of those services that should never be web-facing (assuming there are such services)? Will I need to use hostssl instead of host in my pg_hba.conf, even though the data will presumably travel only on my own network?
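
    A sketch of the usual belt-and-braces settings (addresses and names are placeholders, with 10.0.0.5 standing in for the web VM and 10.0.0.10 for the database VM's private address):

        # postgresql.conf -- listen only on loopback and the private interface
        listen_addresses = 'localhost, 10.0.0.10'

        # pg_hba.conf -- one line per client, narrowest possible address range
        hostssl  appdb  app_user  10.0.0.5/32  md5

    hostssl requires ssl = on and a server certificate but costs little even on a private segment; either way, keep the iptables rule for 5432 scoped to the web VM's source address rather than to any host, since that is what actually keeps the service from being web-facing.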

    Read the article

  • Windows Server 2003 Synchronize Not Sticking

    - by lkessler
    We have a Windows Server 2003 machine. It had RAID running on two disks; one disk failed and then the RAID controller failed. We replaced the disk and controller and restored everything, and no data was lost. The users of that server then found a number of directories that appeared empty. From their machines, we could right-click on a directory, select "Synchronize", and the files in the directory would become visible to them. However, after opening Internet Explorer, browsing the web and FTPing to a web site, the files in the directory would vanish again, and we would have to "Synchronize" once more to get them to reappear. What is causing this need to Synchronize and then re-Synchronize? What do we need to do so that the directories are permanently visible?

    Read the article

  • Two different sites, same IP, same top-level domain, on IIS 7.5 -- one works and the other displays HTTP 404 error

    - by user717236
    I'm running a Windows 2008 R2 box with IIS 7.5 as the web server. On IIS, I have two websites: mysubsite1.mysite.com and mysubsite2.mysite.com. There is only one IP on the server and both sites share it; each site is bound to that IP on port 80 with its own Host name filled in. mysubsite1.mysite.com works fine. However, mysubsite2.mysite.com gives me the following error: "Not Found — HTTP Error 404. The requested resource is not found." Now, if I change the Host name field for mysubsite1.mysite.com to blank and restart the web server, both sites work! The question is: why does the host name field on the first site cause an HTTP 404 error for the second site when both sites' Host name fields are filled? I would appreciate any insight. Thank you.
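
    For reference, the bindings can be checked and set from the command line as well; a sketch run from %windir%\system32\inetsrv (the site names here are assumptions about how the sites are named in IIS, not taken from the question):

        appcmd list site
        appcmd set site /site.name:"mysubsite1.mysite.com" /bindings:http/*:80:mysubsite1.mysite.com
        appcmd set site /site.name:"mysubsite2.mysite.com" /bindings:http/*:80:mysubsite2.mysite.com

    If the second site still returns 404 with both host headers in place, it is worth confirming that clients really resolve mysubsite2.mysite.com to this IP and that the incoming Host header matches the binding exactly, since a blank host-header binding on site 1 acts as a catch-all and can mask a mismatch.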

    Read the article

  • Kerberos: connection from win app running from IIS to SQL failed

    - by Mikhail Kislitsyn
    I have an IIS web application with Windows authentication and impersonation. The application connects to SQL Server, and for that connection Kerberos works fine. But there is a problem: the web application also launches a Windows application (not .NET) which connects to the same SQL Server. The Windows application runs with the IIS app user's credentials and impersonates the current site user to connect to SQL Server (scheme: http://i.stack.imgur.com/2cgv7.png). When delegation for the IIS user is set to "Trust this computer for delegation to any service", everything works fine, but I can't use that type of delegation according to our security requirements. When I set delegation to "Specific services" and choose the MSSQLSvc SPN, the connection from the Windows application fails with an "ANONYMOUS" fault, and Wireshark shows a "KRB5KDC_ERR_BADOPTION" packet. What am I doing wrong?
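
    Constrained delegation only works for SPNs that actually exist and are listed on the delegating account, so a sketch of the first checks (account and host names are placeholders):

        :: SPNs currently registered for the SQL Server service account
        setspn -L DOMAIN\sqlServiceAccount

        :: register both forms if they are missing (adjust host, instance and port)
        setspn -S MSSQLSvc/sqlhost.domain.local DOMAIN\sqlServiceAccount
        setspn -S MSSQLSvc/sqlhost.domain.local:1433 DOMAIN\sqlServiceAccount

    Both SPN forms should then appear in the "Specific services" list of the delegating account, and the spawned process must connect using the fully qualified host name that matches the SPN; KRB5KDC_ERR_BADOPTION is the KDC's usual answer when a ticket is requested for a service outside the allowed list, after which the client falls back to an anonymous connection.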

    Read the article

  • Installing Team Foundation Server 2010 with SharePoint Foundation 2010

    - by AKa
    Is it possible to install TFS 2010 with SharePoint Foundation 2010? If so, is there an installation guide? UPDATE (05 February 2010): I found some useful help on the Internet. The remaining problem is that I can't use the standard port 80 for the SharePoint Web Application, because that port is already assigned to my existing web site. So what do I need to do to use another port? Can I simply use a different port, or should I use host-header bindings? Best regards, Anton Kalcik

    Read the article

  • Bash Script to Compress / Transfer / Remove Log Files

    - by Jason
    I am currently using cronolog to give the Apache log files date-based names. They are in the following format:

        /WEB/LOGS/APACHE_ACCESS_YYYY-MM-DD.log
        /WEB/LOGS/APACHE_ERROR_YYYY-MM-DD.log

    I would like a script that runs on the first of every month, compresses the log files from the previous month, transfers them to another host (via SCP) and then deletes the compressed file. I've found several examples like the one below that select files x days old, but I need all files from the previous month:

        find . -name '*.log' -mtime +1 -type f

    I am the first to admit my bash scripting skills are weak, so I would really appreciate any help and guidance.
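
    A sketch of the monthly job, assuming GNU date (for the "last month" arithmetic), key-based SSH to the destination, and placeholder host/paths; it can run from root's crontab with something like 0 2 1 * * /usr/local/bin/ship-apache-logs.sh:

        #!/bin/bash
        set -e

        LOGDIR=/WEB/LOGS
        DEST='backup@archive.example.com:/archive/apache'
        MONTH=$(date -d 'last month' +%Y-%m)            # e.g. 2012-10
        ARCHIVE="/tmp/apache-logs-$MONTH.tar.gz"

        cd "$LOGDIR"
        # every access and error log whose name carries last month's date
        tar czf "$ARCHIVE" APACHE_ACCESS_${MONTH}-*.log APACHE_ERROR_${MONTH}-*.log
        scp "$ARCHIVE" "$DEST"
        rm -f "$ARCHIVE"
        # once the transfer is trusted, the originals could be removed here as well:
        # rm -f APACHE_ACCESS_${MONTH}-*.log APACHE_ERROR_${MONTH}-*.log

    Because of set -e, a failed scp aborts the script before anything is deleted.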

    Read the article

  • DNS Provider/Domain Registrar

    - by Arcath
    I have a whole bunch of domains with my current web host. When I got the package it came with a few gigs of web space and a bunch of MySQL databases, but times have changed: I no longer use the hosting I'm paying for, and I just use my host as a DNS server to forward everything elsewhere. Removing the host will require me to transfer all the domains to another package, which is going to cause disruption, so my question is: who is the best provider for DNS only? I don't want any space or mail, just someone to hold the domains and let me set any DNS options I want (A/MX/CNAME records for everything, possibly even the ability to point my domains at my own DNS server).

    Read the article

  • Slow disk transfer rate

    - by Nooklez
    I have a problem with a slow disk transfer rate. It's a static file server for our website. I was making a backup of the data and noticed that tar is very slow, so I ran hdparm -t:

        hdparm -t /dev/sda3

        /dev/sda3:
         Timing buffered disk reads: 6 MB in 4.70 seconds = 1.28 MB/sec

    It's a low-traffic hour on our site right now, so heavy I/O traffic is not the reason (iotop shows less than 1 MB/s). It's a RAID-10 setup (2x2 SATA drives):

        Unit  UnitType  Status  %RCmpl  %V/I/M  Stripe  Size(GB)  Cache  AVrfy
        ------------------------------------------------------------------------------
        u0    RAID-10   OK      -       -       64K     1396.96   W      ON

        VPort Status  Unit  Size       Type  Phy  Encl-Slot  Model
        ------------------------------------------------------------------------------
        p0    OK      u0    698.63 GB  SATA  0    -          WDC WD7500AADS-00M2
        p1    OK      u0    698.63 GB  SATA  1    -          WDC WD7500AADS-00M2
        p2    OK      u0    698.63 GB  SATA  2    -          WDC WD7500AADS-00M2
        p3    OK      u0    698.63 GB  SATA  3    -          WDC WD7500AADS-00M2

    We have recently changed almost all components of the server (excluding the 3ware controller and the disks), and I think the problems started then. Could this be a configuration problem or a hardware one?

    EDIT: I found something like this in dmesg:

        [166843.625843] irq 16: nobody cared (try booting with the "irqpoll" option)
        [166843.625846] Pid: 0, comm: swapper Not tainted 3.1.5-gentoo #3
        [166843.625847] Call Trace:
        [166843.625848] <IRQ> [<ffffffff810859d5>] __report_bad_irq+0x35/0xc1
        [166843.625856] [<ffffffff81085cec>] note_interrupt+0x165/0x1e1
        [166843.625859] [<ffffffff8108445f>] handle_irq_event_percpu+0x16f/0x187
        [166843.625861] [<ffffffff810844a9>] handle_irq_event+0x32/0x51
        [166843.625863] [<ffffffff8108640b>] handle_fasteoi_irq+0x75/0x99
        [166843.625866] [<ffffffff810039d7>] handle_irq+0x83/0x8b
        [166843.625868] [<ffffffff810036ad>] do_IRQ+0x48/0xa0
        [166843.625871] [<ffffffff8155082b>] common_interrupt+0x6b/0x6b
        [166843.625872] <EOI> [<ffffffff812981e8>] ? acpi_safe_halt+0x22/0x35
        [166843.625877] [<ffffffff812981e2>] ? acpi_safe_halt+0x1c/0x35
        [166843.625879] [<ffffffff81298216>] acpi_idle_do_entry+0x1b/0x2b
        [166843.625881] [<ffffffff81298276>] acpi_idle_enter_c1+0x50/0x99
        [166843.625884] [<ffffffff813b792a>] cpuidle_idle_call+0xed/0x171
        [166843.625886] [<ffffffff81001257>] cpu_idle+0x55/0x81
        [166843.625888] [<ffffffff81532a69>] rest_init+0x6d/0x6f
        [166843.625891] [<ffffffff81aa1aca>] start_kernel+0x329/0x334
        [166843.625893] [<ffffffff81aa12a6>] x86_64_start_reservations+0xb6/0xba
        [166843.625894] [<ffffffff81aa139c>] x86_64_start_kernel+0xf2/0xf9
        [166843.625896] handlers:
        [166843.625898] [<ffffffff812dc8de>] twl_interrupt
        [166843.625900] Disabling IRQ #16

    Is that related to the problem?

    EDIT #2: Based on feedback in the comments, here is more information.

        cat /proc/interrupts
        16: 390813 0 0 0 IO-APIC-fasteoi 3w-sas

    Controller model:

        [ 1.095350] 3ware Storage Controller device driver for Linux v1.26.02.003.
        [ 1.095467] 3ware 9000 Storage Controller device driver for Linux v2.26.02.014.
        [ 1.095641] LSI 3ware SAS/SATA-RAID Controller device driver for Linux v3.26.02.000.
        [ 1.095787] 3w-sas 0000:01:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
        [ 1.095881] 3w-sas 0000:01:00.0: setting latency timer to 64
        [ 1.910801] 3w-sas: scsi0: Found an LSI 3ware 9750-4i Controller at 0xfe560000, IRQ: 16.
        [ 2.216537] 3w-sas: scsi0: Firmware FH9X 5.08.00.008, BIOS BE9X 5.07.00.011, Phys: 8.
        [ 2.216836] scsi 0:0:0:0: Direct-Access LSI 9750-4i DISK 5.08 PQ: 0 ANSI: 5

    And the motherboard:

        description: Motherboard
        product: P8H67-M
        vendor: ASUSTeK Computer INC.
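
    The last line of that trace is the significant one (a reading of the log, not a certain diagnosis): "Disabling IRQ #16" means the kernel shut off the interrupt line the 3w-sas controller uses, after which the driver limps along without proper interrupts, which fits a 1.28 MB/s read rate. As a first test the kernel can be booted with irqpoll, as the message itself suggests, and the drives behind the controller checked individually (the smartctl device syntax for the 9750 is an assumption; adjust it to whatever your smartmontools build expects):

        # add to the kernel command line in the boot loader for the next boot
        irqpoll

        # SMART data for each drive behind the 9750-4i (ports 0-3)
        smartctl -a -d 3ware,0 /dev/twl0

    irqpoll only masks the symptom; if it restores normal throughput, the next suspects are the slot the controller sits in, shared-IRQ conflicts introduced by the new motherboard, and the controller/driver/firmware combination.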

    Read the article

  • Can admins monitor my activity locally even when I use a VPN?

    - by Arjun Create
    My school has one of those super-overreacting web blockers (specifically Fortisnet) that blocks things that should be accessible to a high-school senior trying to research projects. Despite many students' complaints, the administration's hands are tied because of parents' complaints. I have set up a VPN account from http://www.vpnreactor.com, and with it I am able to bypass the blocker. I know this service hides my IP from websites and servers on the web. I also know that the school pays an IT guy just to monitor the sites and network traffic the students are using. Basically, will he be able to see my network traffic? More importantly, will he be able to trace it to my computer or its MAC address? I am connecting over Wi-Fi, not ethernet.

    Read the article

  • Links not opening in Windows 8 applications

    - by Martin Brown
    I have just upgraded my laptop from Windows 7 Ultimate with IE9 to Windows 8 with IE10. Whenever I try to open a web link in an application, nothing happens. For example, the Windows 8 Mail application makes an email address in a message look like a link; when I click on it, the link changes to look like I have clicked on it, but aside from that nothing happens. I have tried making sure that IE10 is my default web browser and gone into IE10's settings to associate it with all the usual file types, but still no success. Has anyone got any ideas what might be causing this? I have a feeling it started when the Browser Choice update ran. I couldn't select which browser to install in that either; the install buttons just flashed when I clicked them and then nothing.

    Read the article

  • Fast swapping of production and staging in IIS

    - by Nathan Ridley
    I'm using IIS 7 on my own dedicated server. Let's say I have two web applications: one points to folder A and one points to folder B. The first is used for production and the second for staging. I want to set up a scenario whereby I upload my application to staging, make sure everybody's happy, then swap the folders that each web application points at, thereby putting "staging" live and making the old production environment the new staging environment. What's a good way to do this? I know Microsoft themselves use this methodology on their Azure platform and I've seen it used elsewhere too. How can I do it on my server with IIS 7?
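
    One approach that avoids touching bindings at all (a sketch with hypothetical site names and paths): keep both builds on disk and repoint each site's root virtual directory, which IIS applies almost instantly and which is easy to script for the swap back in the other direction:

        appcmd set vdir "production.example.com/" /physicalPath:"D:\sites\folderB"
        appcmd set vdir "staging.example.com/"    /physicalPath:"D:\sites\folderA"

    The two folders should differ only in build output (connection strings and machine keys kept identical or externalized), and a warm-up request after the swap spares real users the cold-start hit.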

    Read the article

  • Corporate Wiki Organization - Technical Documentation

    - by Dave Jarvis
    Corporations have documents describing various aspects of their technical systems, including:

        Custom Applications
        Custom Development Frameworks
        Third Party Applications
            Accounting
            Bug Tracking
            Network Management
        How To Guides
        User Manuals
        Software Tools
            Web Browsers
            Development IDEs
            Graphics (GIMP, xv)
            Text Editing
            File Transfer (ncFTP, WinSCP)
        Hardware
            Servers (Web, Database, Exchange, File)
            Network Devices
            Printers

    If you had to use a wiki to manage the documentation, what other items would you add to the list, and how would you organize it? (For example, would Software Tools make more sense under Third Party Applications?) A few constraints:

        The structure should not go beyond three levels deep.
        Avoid the word "and" in favour of two different categories.
        Keep the structure general: it should apply as broadly as possible.
        The target audience is primarily technical, but the wiki could be visible to anyone.

    Read the article

  • How to run a service as a user who can't delete or update or create a file

    - by neeraj
    MongoDB has a web-based console for trying out MongoDB, and I have created something similar for trying out Node.js. It accepts user input and then performs eval on that command. Given the power of Node.js, someone using the web console could create a file, delete files on the system, or execute 'rm -rf '. I was thinking of running node as a user called node that has no privilege to write, create or update anything; the only access this user would have is read access. Will that work, or is that too much of a risk? What is a good strategy for handling such a situation?
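
    A sketch of the file-system side of that lockdown, assuming the app lives in /srv/nodeconsole (user name and paths are placeholders). It only covers write access; CPU, memory and outbound-network abuse from eval'd code still need rlimits, a chroot/container, or Node's vm module on top.

        useradd --system --shell /usr/sbin/nologin --home /nonexistent node
        chown -R root:root /srv/nodeconsole
        chmod -R a-w,o+rX /srv/nodeconsole    # readable and traversable, writable by nobody
        sudo -u node node /srv/nodeconsole/server.js

    With no writable home directory and no write bit anywhere under the app, an eval'd unlink or writeFile fails with EACCES, though anything world-writable elsewhere on the box (such as /tmp) is still fair game for the evaluated code.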

    Read the article

  • Application Request Routing (ARR) - Single Server Reverse Proxy(ish) Setup

    - by Justin
    I have 1 webserver that has two .NET apps running on it. These are set up on the server as app1.mydomain.com and app2.mydomain.com. I would like to be able to take any request going to app1.mydomain.com/subfolder and rewrite it to app2.mydomain.com/subfolder using ARR. I am having difficulty getting this to work on a single server, and all the ARR examples on the net seem to imply that I require another server dedicated to ARR sitting in front of the two web servers. Is what I am attempting to do possible on one web server, and if so how?! Thanks all.
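
    It is possible on one box: ARR just needs the proxy enabled (Server Proxy Settings in IIS Manager) and a URL Rewrite rule on app1 that hands the matching paths to app2. A sketch for app1's web.config, using the host names from the question ("subfolder" is a placeholder for the path prefix to forward):

        <system.webServer>
          <rewrite>
            <rules>
              <rule name="forward-subfolder-to-app2" stopProcessing="true">
                <match url="^subfolder/(.*)" />
                <action type="Rewrite" url="http://app2.mydomain.com/subfolder/{R:1}" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>

    Because both sites live on the same IIS instance, the main thing to watch is that app2's host header resolves from the server itself, for example via a hosts-file entry pointing it at 127.0.0.1.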

    Read the article

  • Save Website To Disk

    - by Christian
    Hello everyone! I have a very poor internet connection when I'm at home; the only time I have good internet is at college. At home, the most mundane task like opening a web page becomes a five-minute stress test. So what I was thinking was to download the web page ahead of time, for example superdickery. What would be the best method to download the entire image archive of the site? And would it be illegal if I did this? It's just that I don't want to be frustrated every time I want to load a simple JPEG image.
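
    A sketch using wget, which ships with most Linux distributions and is available for other systems (whether a given site permits bulk downloading is a terms-of-service question this can't answer); restricting it to one host keeps it from wandering off across external links:

        wget --mirror --convert-links --page-requisites --no-parent \
             --wait=1 http://www.superdickery.com/

    Adding -A jpg,jpeg,png,gif keeps only image files at the cost of the pages that link them, and --wait spaces the requests out so the crawl stays polite on a slow or shared connection.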

    Read the article

  • How to prevent mod_proxy from rewriting redirects into absolute URLs?

    - by Yang
    I have: nginx (port 80) reverse-proxying to apache2 (port 88), which reverse-proxies to a web app (port 5001). However, when the web app responds with a redirect like Location: /foo, apache2 rewrites this into Location: http://host.com:88/sub/foo, even though port 88 is publicly inaccessible. I'd like it to just redirect to the relative URL Location: /sub/foo. Any ideas? My Apache config (using mod_proxy_http, mod_proxy_html, mod_substitute):

        <Location /notes/>
            Allow from all
            ProxyPass http://127.0.0.1:5001/
            SetOutputFilter proxy-html
            ProxyPassReverse /
            ProxyHTMLURLMap / /notes/
            RequestHeader unset Accept-Encoding
            AddOutputFilterByType SUBSTITUTE application/atom+xml
            Substitute "s|127.0.0.1:5001|host.com/notes|"
        </Location>
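
    Two things worth trying (a sketch; which hop builds the absolute URL determines which change matters): make ProxyPassReverse name the backend the same way ProxyPass does, so backend-issued Location headers are mapped onto /notes/, and have nginx rewrite any Location header that still carries the internal :88 origin back to a relative path:

        # inside the existing <Location /notes/> block in Apache
        ProxyPassReverse http://127.0.0.1:5001/
        ProxyPassReverse /

        # in the nginx server block that fronts Apache on port 80
        proxy_set_header Host $host;
        proxy_redirect   http://host.com:88/ /;

    proxy_redirect acts only on the Location and Refresh response headers, so it won't disturb the body rewriting that mod_proxy_html and mod_substitute are already doing.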

    Read the article
