Search Results

Search found 10391 results on 416 pages for 'sys dm exec requests'.

  • Hosting a JavaScript API file for third-party sites the way ShareThis, UserVoice, Analytics do it.

    - by Dayson
    I'm preparing to launch a service soon which will provide third-party websites a widget. The widget requires my JavaScript file in the website's code, exactly the same way services like Analytics, UserVoice, ShareThis, GetClicky, etc. provide you with a JavaScript snippet to add to your page. Therefore, my JavaScript file is going to be hotlinked by tons of websites, which possibly receive a lot of requests too. I need advice/opinions on the following aspects: What's the right location for hosting this file? Should I use a sub-domain for it? I was thinking of something like http://api.myservice.com/js/foo.js . Remember, once websites start embedding this file, its location CANNOT change under any circumstances. Right now we can afford just one dedicated server, so I have minified my file, enabled gzip, and plan to use some good cache-control headers through Apache. Also, in the near future when the requests pick up, I will use an HTTP proxy like Varnish. Is this a good plan for the near future? Should I be considering a CDN in the future (since we can't afford it now)? If so, how do I make sure we're prepared to migrate to it without breaking services? Pros/cons of moving just this file to a CDN? Also, since it's just one JavaScript file (50 KB), is there an affordable CDN we could consider from the beginning? Any other word of advice I could use? Anything I shouldn't overlook at this stage which I would regret later? (Both in terms of server and JavaScript/AJAX limitations.) Thanks in advance.
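
    A minimal sketch of the cache-header part on Apache, assuming mod_deflate, mod_expires and mod_headers are enabled (the host and file names are the ones proposed above):

        <VirtualHost *:80>
            ServerName api.myservice.com
            DocumentRoot /var/www/api

            <Files "foo.js">
                # Compress on the fly and let clients/proxies cache for a day;
                # a short TTL keeps you able to ship fixes reasonably quickly
                SetOutputFilter DEFLATE
                ExpiresActive On
                ExpiresDefault "access plus 1 day"
                Header append Cache-Control "public"
            </Files>
        </VirtualHost>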

    Read the article

  • How to bypass firewall to connect to a proxy server?

    - by Bruce
    I am conducting a small experiment on my office network. I have set up a proxy server on my desktop machine (connected to my LAN) and I have volunteers accessing the internet via my proxy server. Everything is working well. The problem is people cannot connect to the proxy server through their laptops. I asked my network admin and he said the wireless network has a firewall which prevents users from connecting to my proxy. He said I could tunnel the traffic or use SSH, though. I am afraid I do not understand fully what is going on. Is there a way by which users connected to the wireless network can connect to my desktop? I am using FreeProxy on Windows as my proxy server: http://www.handcraftedsoftware.org/index.php?page=download FreeProxy allows me to create a SOCKS 4/4a/5 proxy. Is that what I need? Part of the experiment involves logging the URL requests of the users. I am doing a measurement study, so any solution must allow me to log the URL requests of users. Also, what changes do I need to make in the browser configuration?
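
    Since the admin suggested SSH, one way through the firewall is a local port forward, assuming the desktop runs an SSH server reachable from the wireless segment (the hostname and the FreeProxy port 8080 are examples, not known values):

        # Run on a volunteer's laptop: carry proxy traffic inside SSH
        ssh -N -L 8080:localhost:8080 user@desktop.office.lan
        # Then set the browser's proxy to localhost:8080; URL logging still
        # happens in FreeProxy on the desktop, so the measurements survive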

    Read the article

  • How to remove request blocking on apache reverse proxy after failure of backend before asking backend again

    - by matnagel
    I am working on an apache2 reverse proxy vhost. When the server behind apache is down, the first request to apache shows the error page, of course. But on subsequent requests it seems apache delays for some time before asking the backend server again. During all this time (which is short, but in development I don't want a delay at all) only the apache error page is shown to the browser, although the backend server is already up. Where is this setting in apache, what is this behaviour, and how can I set the delay time to zero? Edit: I am not trying to change the timeout for a single request. I want to change the blocking time. It is my experience that apache blocks further requests for a certain time before asking a backend server again that has failed once. Edit 2: This is what apache delivers: Service Temporarily Unavailable The server is temporarily unable to service your request due to maintenance downtime or capacity problems. Please try again later. Apache/2.2.8 (Ubuntu) PHP/5.2.4-2ubuntu5.7 with Suhosin-Patch proxy_html/3.0.0 Server at localhost Port 80 After hitting Ctrl-R in Firefox for 60 seconds the page finally appears.
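
    The blocking time described here sounds like mod_proxy's worker error state: after a backend failure the worker is skipped for a configurable number of seconds (60 by default in Apache 2.2, which matches the 60 seconds of Ctrl-R above). A sketch of turning that off, with a hypothetical backend address:

        # retry=0 retries the backend immediately instead of serving the
        # error page for the rest of the 60-second error window
        ProxyPass        /app http://backend.example.com:8080/app retry=0
        ProxyPassReverse /app http://backend.example.com:8080/app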

    Read the article

  • Need help with some IIS7 web.config compression settings.

    - by Pure.Krome
    Hi folks, I'm trying to configure my IIS7 compression settings in my web.config file. I'm trying to enable HTTP 1.0 requests to be gzipped. MSDN has all the info about it here. Is it possible to have this config info in my own website's web.config file? Or do I need to set it at an application level? Currently, I have this code in my web.config... <system.webServer> <urlCompression doDynamicCompression="true" dynamicCompressionBeforeCache="true" /> <httpCompression cacheControlHeader="max-age=86400" noCompressionForHttp10="False" noCompressionForProxies="False" sendCacheHeaders="true" /> ... other stuff snipped ... </system.webServer> It's not working :( HTTP 1.1 requests are getting compressed, just not 1.0. That MSDN page above says that it can be used in: Machine.config, ApplicationHost.config, Root application Web.config, Application Web.config, Directory Web.config. So, can we set these settings on a per-website basis, programmatically, in a web.config file? (This is an Application Web.config file...) What have I done wrong? Cheers :) EDIT: I was asked how I know HTTP 1.0 is not getting compressed. I'm using the Failed Request Tracing Rules, which report back:- DYNAMIC_COMPRESSION_START DYNAMIC_COMPRESSION_NOT_SUCESS Reason: 3 Reason: NO_COMPRESSION_10 DYNAMIC_COMPRESSION_END
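
    One likely explanation, worth verifying: unlike <urlCompression>, the <httpCompression> section is normally locked to applicationHost.config (server level), so the noCompressionForHttp10 attribute in an application web.config is silently ignored. A sketch of setting it server-wide instead, from an elevated prompt:

        %windir%\system32\inetsrv\appcmd.exe set config ^
            -section:system.webServer/httpCompression ^
            /noCompressionForHttp10:"False" /commit:apphost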

    Read the article

  • How To Set Up A Load-Balanced High-Availability Apache Cluster On Windows

    - by bReAd
    Setting up a two-node Apache web server cluster that provides high availability. In front of the Apache cluster we create a load balancer that splits up incoming requests between the two Apache nodes. Because we do not want the load balancer to become another “Single Point Of Failure”, we must provide high availability for the load balancer, too. Therefore our load balancer will in fact consist of two load balancer nodes that monitor each other using heartbeat, and if one load balancer fails, the other takes over silently. The following setup is proposed: Apache node 1: webserver1.example.com (webserver1) – IP address: 192.168.0.101; Apache document root: /var/www. Apache node 2: webserver2.example.com (webserver2) – IP address: 192.168.0.102; Apache document root: /var/www. Load balancer node 1: loadb1.example.com (loadb1) – IP address: 192.168.0.103. Load balancer node 2: loadb2.example.com (loadb2) – IP address: 192.168.0.104. Virtual IP address: 192.168.0.105 (used for incoming requests). Currently, there are many solutions for Linux machines and there aren't any on Windows; I've searched a long time for solutions on the Windows platform. How do I create the virtual IP in Windows, perform monitoring, and make the load balancer listen on the virtual IP address?
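
    One Windows-native option for the load balancer pair is the built-in Network Load Balancing (NLB) feature, which lets two nodes share a virtual IP with failover. A sketch using the NLB PowerShell module that ships with Windows Server 2008 R2 (the interface and cluster names are made up; the addresses are the ones proposed above):

        Import-Module NetworkLoadBalancingClusters
        # On loadb1: create the cluster that owns the virtual IP
        New-NlbCluster -InterfaceName "Ethernet" -ClusterName "weblb" `
            -ClusterPrimaryIP 192.168.0.105 -OperationMode Multicast
        # Join loadb2 so it takes over if loadb1 fails
        Get-NlbCluster | Add-NlbClusterNode -NewNodeName "loadb2" `
            -NewNodeInterface "Ethernet"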

    Read the article

  • Is it possible to rate limit based on host headers? i.e. not just on IP address

    - by Blankman
    I have a web service endpoint that I am building where people will post an XML file, and it will really get pounded with over 1K requests per second. Now they are sending in these XML files via HTTP POST, but a good majority of them will be rate limited. The problem is, the rate limiting will be done by the web application by looking up the source_id in the XML, and if it is over x requests per minute, it will not be processed further. I was wondering if I could do the rate-limit checking earlier in the processing somehow, and thus save the 50K file from going through the pipeline to my web servers and eating up resources. Could a load balancer make a call out to verify rate usage somehow? If this is possible, I could maybe put the source_id in a host header so even the XML file doesn't have to be parsed and loaded into memory. Is it possible to just look at host headers and not load up the entire 50K XML file into memory? I really appreciate your insights, as this takes more knowledge of the entire TCP/IP stack etc.
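
    If the clients can be asked to copy source_id into a request header, an L7 proxy in front of the web servers can reject over-limit senders before the body is read into the application. A sketch in nginx (the header name, rate and backend name are all made up for illustration):

        # Bucket requests by the X-Source-Id header, 100 requests/minute each
        limit_req_zone $http_x_source_id zone=bysource:10m rate=100r/m;

        server {
            listen 80;
            location /endpoint {
                limit_req  zone=bysource burst=20 nodelay;
                proxy_pass http://backend;
            }
        }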

    Read the article

  • Lighttpd - byte range requests don't work; can't stream MP4

    - by w-01
    I am attempting to use the latest Flowplayer (if it could work it would be pretty awesome, btw): http://flowplayer.org One of the cool things about it is that it uses the new HTML5 video element and supports random seeking/playback. In order to do this, you need a byte-range-request-capable server on the backend. Luckily I'm using Lighttpd 1.5.0 on the backend. Unfortunately the current behavior is that when I do a random seek, the video simply restarts itself from the beginning. The docs say: "For HTML5 video you don't have to do any client side configuration. If your server supports byte range requests then seeking should work on the fly. Most servers including Apache, Nginx and Lighttpd support this." On my page, using Chrome web developer tools, I can see that when the video is requested, the server response headers indicate it is able to accept byte ranges: Accept-Ranges: bytes. When I do a random seek in the player, I can see that byte ranges are requested appropriately in the request header: Range: bytes=5668-10785. I can also verify the moov atom is at the front of the video file. My question here is whether there is something else on the lighttpd side I'm missing in order to enable byte-range requests? The reason I ask is because the current behavior suggests that lighttpd simply doesn't understand the byte range request and is just re-serving the video from the beginning. Update: it's clearer to put this here. As per RJS' suggestion I ran a curl command. In the response it looks like lighttpd is working as expected: Content-Range: bytes 1602355-18844965/18844966 Content-Length: 17242611
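
    For reference, a quick way to test byte-range support by hand, presumably close to what RJS suggested (the URL is a placeholder):

        # Ask for the first kilobyte only; a range-capable server answers
        # "206 Partial Content" with a matching Content-Range header
        curl -s -D - -o /dev/null -H "Range: bytes=0-1023" http://example.com/video.mp4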

    Read the article

  • DD-WRT router causing IP address conflicts across network

    - by r.tanner.f
    My DD-WRT router has lost its mind! I just set up two DD-WRT routers, one as a WAP (working fine) and one in Client Bridge (routed) mode (the problem). Not long after setup I started seeing IP address conflicts on other machines. The event log always points the finger at my Client Bridge router's MAC address. Neighbour table overflow The log on my router is flooded with Neighbour table overflow errors. These start a minute or two after boot. The network is rather large, with +200 IP addresses being used in this subnet. The other router shows no such errors. Mass ARP requests from 1.1.1.1 I'm also seeing constant ARP requests (with the problem router's MAC address) from 1.1.1.1. Seems like it's bugging everything on the network for its MAC address and then promptly forgetting it (or never receiving a response). Configuration: Model: Buffalo N600 Firmware: DD-WRT v24SP2-MULTI (03/21/11) Wireless Mode: Client Bridge (routed) I'm not sure what configuration details are relevant and I'd rather not have comments flooded, so just ping me in this chat if you want to know something. Why is my router stealing IP addresses and how can I stop it?

    Read the article

  • Multiple streaming servers behind a Bastion Host

    - by Bond
    I am using the open source streaming server Red5 on multiple servers, which are running behind a bastion host. The world knows these sites as http://site1.mydomain.com http://site2.mydomain.com http://site3.mydomain.com http://site4.mydomain.com To reach them, the front-end server uses an Apache reverse proxy. I also have video streaming on each of these websites using RTMP. To be able to reach the streaming server I embed a javascript in HTML pages as follows: Code: <embed ..... var="rtmp://site1.my_domain.com" > The problem is that there are many sites: site1.mydomain.com site2.mydomain.com site3.mydomain.com site4.mydomain.com each on a separate physical server. Each of these four has its own Red5 installation; the front end to all of them is a common bastion host. If I run RTMP on each of the subdomains at a different port, how will I make sure a request such as rtmp://site1.mydomain.com rtmp://site2.mydomain.com goes to their respective servers from the front-end server? What do I need to handle in this case? IPTABLES came to mind instantly, but when someone on the internet requests rtmp://site1.mydomain.com from the client browser, how will I make sure this RTMP request is mapped to a port different than 1935, as there are three other streaming servers which also have to respond to their respective requests?
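
    Since Apache's mod_proxy doesn't speak RTMP, one approach is port-level NAT on the bastion: give each site its own public RTMP port and DNAT it to the matching backend's 1935. A sketch with made-up internal addresses:

        # On the bastion: one external port per Red5 backend
        iptables -t nat -A PREROUTING -p tcp --dport 1935 -j DNAT --to-destination 10.0.0.1:1935
        iptables -t nat -A PREROUTING -p tcp --dport 1936 -j DNAT --to-destination 10.0.0.2:1935
        # Requires ip_forward=1 and, if the backends don't route replies via
        # the bastion, a matching SNAT/MASQUERADE rule for the return path
        # Embed code then uses e.g. var="rtmp://site2.mydomain.com:1936"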

    Read the article

  • Configuring Apache and PHP to handle many connections

    - by Marc
    My preliminary setup is like this: two quad-core 8GB servers running Debian 6 with PHP and Apache, and one quad-core 16GB server running Debian 6 with MySQL. My plan is to have one 8GB server act as a proxy server, using Vert.x (Java) to handle connections. I will let Vert.x use HttpClient to send web requests to the second 8GB server. This one will have Apache installed and use PHP to deliver any information that it gets from the MySQL server on the third, 16GB server. The main reason I want this setup is to have things separated, so the "proxy" will be the only way to access the system, as the other two servers will only be reachable from the local network. I can have the Vert.x proxy handle 5000+ concurrent connections, but I don't know how to configure Apache to handle all the requests coming from the proxy. PHP will connect over mysqli with a persistent connection pool of 500-800 connections; the MySQL server seems not to have any issues on this part. In previous projects, the Apache part was always causing issues, no matter how I set it up. I might not fully understand how to set up Apache, since normally Apache should handle many concurrent connections, but it doesn't seem to here.
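
    A sketch of sizing the Apache 2.2 worker MPM for a few thousand concurrent proxied connections (the numbers are starting points to tune, not a recommendation):

        <IfModule mpm_worker_module>
            ServerLimit          64
            ThreadLimit          64
            ThreadsPerChild      64
            MaxClients         4096   # ServerLimit x ThreadsPerChild
            StartServers          4
            MaxRequestsPerChild   0
        </IfModule>

    Note that mod_php is generally run under the prefork MPM (one process per connection), where memory usually becomes the limit long before 5000 connections; with the worker MPM, PHP is typically moved out to FastCGI instead.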

    Read the article

  • Unable to access stackexchange sites from this system

    - by Sandeepan Nath
    Earlier, I was not able to access most of the Stack Exchange sites, like Stack Overflow, Programmers.SE, etc., on my home Windows XP system. I was able to access only a few like http://meta.stackexchange.com and not even http://www.meta.stackexchange.com (note the www). I tried many other sites like http://www.stackoverflow.com, http://area51.stackexchange.com/ but was getting page not found errors on all browsers. Even pinging from the terminal was saying destination host unreachable. I did not check recently, but maybe all SE sites are unreachable now. I was clueless about what could be the issue. A firewall issue, I thought? So I stopped AVG antivirus's firewall, then completely uninstalled it, and even turned off Windows Firewall. But the sites were still unreachable, even after a fresh installation of Windows 7. Then I noticed a "Too many requests" notice on Google, on this page: http://www.google.co.in/sorry/?continue=http://www.google.co.in/# I don't know why this appeared, but I guess somehow too many requests might have been sent to these sites and they blocked me. But in that case, SE would be smart enough to show a captcha like Google. So, how do I confirm the problem and fix it? Similar questions like these don't look solved yet: Unable to access certain websites; Unable to Access Certain Websites. I have lately started actively participating in lots of SE sites. There are new questions popping up in my mind all the time and I am not able to ask them. Please help! Thanks

    Read the article

  • Obey server_name in Nginx

    - by pascal
    I want nginx/0.7.6 (on Debian, i.e. with config files in /etc/nginx/sites-enabled/) to serve a site on exactly one subdomain (indicated by the Host header) and nothing on all others. But it staunchly ignores my server_name settings?! In sites-enabled/sub.domain: server { listen 80; server_name sub.domain; location / { … } } Adding a sites-enabled/00-default with server { listen 80; return 444; } Does nothing (I guess it just matches requests with no Host?) server { listen 80; server_name *.domain; return 444; } Does prevent Host: domain requests from giving results for Host: sub.domain, but still treats Host: arbitrary as Host: sub.domain. The, to my eyes, obvious solution isn't accepted: server { listen 80; server_name *; return 444; } Neither is server { listen 80 default_server; return 444; } Since order seems to be important: renaming 00-default to zz-default, which, if sorted, places it last, doesn't change anything. But Debian's main config just includes *, so I guess they could be included in some arbitrary file-system defined order? This returns no content when Host: is not sub.domain as expected, but still returns the content when Host is completely missing. I thought the first block should handle exactly that case!? Is it because it's the first block? server { listen 80; return 444; } server { listen 80; server_name ~^.*$; return 444; }
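
    On nginx 0.7.x the catch-all parameter is spelled default rather than default_server (the newer spelling arrived in the 0.8 branch), which would explain why that block was rejected. A sketch of a catch-all that also covers requests with no Host header at all:

        server {
            listen      80 default;   # 'default' on 0.7.x; 'default_server' later
            server_name _;            # placeholder that never matches a real name
            return      444;          # close the connection without a response
        }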

    Read the article

  • How to clean up an unprocessed orphan inode list?

    - by bmk
    I tried to mount a formerly read-only mounted filesystem read-writeable: mount -o remount,rw /mountpoint Unfortunately it did not work: mount: /mountpoint not mounted already, or bad option dmesg reports: [2570543.520449] EXT4-fs (dm-0): Couldn't remount RDWR because of unprocessed orphan inode list. Please umount/remount instead A umount does not work either: umount /mountpoint umount: /mountpoint: device is busy. (In some cases useful info about processes that use the device is found by lsof(8) or fuser(1)) Unfortunately neither lsof nor fuser shows any process accessing something located under the mount point. So - how can I clean up this unprocessed orphan inode list to be able to mount the filesystem again without rebooting the computer?
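
    The usual way to process the orphan list without a reboot is to get the filesystem detached and then run e2fsck on the underlying device. A sketch, assuming nothing is genuinely writing to it any more (the device name is the dm-0 from the dmesg line above):

        # Lazy-unmount detaches the mount point even while it looks busy
        umount -l /mountpoint
        # Replay the journal and clean up the orphan inode list
        e2fsck -f /dev/dm-0
        mount /dev/dm-0 /mountpoint

    This is only safe once the device really is idle; running e2fsck on a filesystem that is still being written to can make things worse.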

    Read the article

  • ProCurve 1800 switch issue

    - by user98651
    I recently deployed ProCurve 1800-24G switches in place of some older ProCurve 2424M switches in my network. However, I'm having a serious problem with the switch connected to the router. It seems, every night when our Windows 2008 R2 server (off site) runs a backup to an iSCSI target (on site) [facilitated through a PPTP tunnel], the LAN loses connectivity with the router. To clarify, there is only one router, which is connected to the switch affected by this problem. The only way to resolve the issue is to either reboot the router or pull the ethernet cable that goes to the router and plug it back in. During the outage, clients cannot complete DHCP requests, DNS requests, pings, or anything else involving the router in this state. Now, neither the switch nor the router is configured extensively, and the issue only seems to have surfaced with the new switch in place. I have tried a number of things, including replacing cables, rebooting, and checking the switch configuration (it is literally as basic as you can get at this point - flat LAN, no trunking). Interestingly, the router (accessed externally) shows no changes in configuration or status during this state, but similarly cannot ping or access other hosts on the network. This issue occurs at different stages of the backup (i.e., different amounts transferred). I've also dumped packets from the switch into Wireshark but cannot seem to find any anomaly yet (I'm looking at packets around the time the issue appeared and at the time when I reset the NIC). Any suggestions for what to look for? Ideas on what could be causing this? I'm seeing some transmit/receive errors on the NIC from both the router and switch side, but nothing serious when compared to the total packet counts. I'm seriously doubting hardware at this point, as I have tried another switch, different cables, and a different NIC on the router.

    Read the article

  • Varnish configuration, NameVirtualHosts, and IP Forwarding

    - by Brent
    I currently have a bunch of NameVirtualHost-based websites, load balanced between 3 apache2 servers using ldirectord. I would like to insert varnish as a reverse web proxy between ldirectord and apache in the following way: a request comes in to ldirectord; it is then load balanced between the 3 apache2 servers and varnish, with a weight of 1 for the webservers and 99 for varnish (so if varnish is rebooted, the webservers will take over seamlessly); varnish will then load balance its requests between my apache2 servers. However, the varnish part is not working. I wonder whether this has to do with the fact that my apache servers use x.x.x.x:80 for their NameVirtualHosts, instead of *:80? (They have to do this, since each server hosts multiple IP addresses.) Or perhaps it has to do with the need for IP forwarding to be set up on the varnish server? (I did echo 1 > /proc/sys/net/ipv4/ip_forward on this server; is that sufficient?) How can I debug this problem? ldirectord doesn't produce logs of what it does with each request (and if it did, I would be overwhelmed with information, since I'm serving hundreds of requests per second); the varnish log shows the ldirectord server connecting to it every 5 seconds, but nothing else. I have set up a test site using this configuration, but it fails - no apache access logs, no applicable varnish logs.
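
    For what it's worth, IP forwarding shouldn't be needed here: Varnish terminates each TCP connection and opens its own connections to the backends, so the kernel never routes those packets. A sketch of the backend half of a Varnish 2.x VCL for this layout (the backend IPs are placeholders):

        backend web1 { .host = "192.168.1.11"; .port = "80"; }
        backend web2 { .host = "192.168.1.12"; .port = "80"; }
        backend web3 { .host = "192.168.1.13"; .port = "80"; }

        director apaches round-robin {
            { .backend = web1; }
            { .backend = web2; }
            { .backend = web3; }
        }

        sub vcl_recv {
            # The Host header passes through untouched, so the x.x.x.x:80
            # NameVirtualHosts still see the name they expect
            set req.backend = apaches;
        }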

    Read the article

  • Forward Apache to Django dev server

    - by Alex Jillard
    I'm trying to get Apache to forward all requests on port 80 to 127.0.0.1:8000, which is where the Django dev server runs. I think I have it forwarding properly, but there must be an issue with 127.0.0.1:8000 not being run by Apache? I'm running the Django dev server in an Ubuntu VMware instance, and I'd like other people in the office to see the apps in development without having to promote anything to our actual dev/staging servers. Right now the virtual machine picks up an IP for itself, and when I point a browser to that URL with the default Apache config, I get the default Apache page. I've since changed the httpd.conf file to the following to try and get it to forward the requests to the Django dev server: ServerName localhost <Proxy *> Order deny,allow Allow from all </Proxy> <VirtualHost *> ServerName localhost ServerAdmin [email protected] ProxyRequests off ProxyPass * http://127.0.0.1:8000 </VirtualHost> All I get are 404s with this, and in error.log I get the following (192.168.1.101 is the IP of my computer, 192.168.1.142 is the IP of the virtual machine): [Mon Mar 08 08:42:30 2010] [error] [client 192.168.1.101] File does not exist: /htdocs
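
    The ProxyPass line looks like the culprit: mod_proxy expects a URL path, not a * wildcard, so the requests fall through to the filesystem (hence "File does not exist: /htdocs"). A sketch of the corrected virtual host:

        <VirtualHost *:80>
            ServerName localhost
            ProxyRequests Off
            ProxyPreserveHost On
            # Map everything to the Django dev server; trailing slashes matter
            ProxyPass        / http://127.0.0.1:8000/
            ProxyPassReverse / http://127.0.0.1:8000/
        </VirtualHost>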

    Read the article

  • Is there a way to use something similar to a capture group for an apache2 server name?

    - by Zipper
    I have a server that sits behind an AWS load balancer. The LB can't do an automatic redirect from HTTP to HTTPS, and the LB is doing my SSL. So I need to set up Apache on my servers to redirect any request on port 80 to https://FOOBAR, where FOOBAR is the domain that came in. I haven't been able to find a way of doing that so far. I'm an Apache newb though. What I'm trying to do is something similar to this (I'll use regex as an example): <VirtualHost *:80> ServerName (.*) Redirect / https://\1 </VirtualHost> If there's a better way to do this, please let me know. EDIT: Sorry, I should have explained why this is happening. I actually have a Tomcat server running my app on port 8080, and the LB points to that. From what I can tell so far my requests come in on HTTP (which is expected), but when my app server sends redirects (for login purposes) it tries to redirect to HTTP, instead of HTTPS. I haven't had a chance to fully investigate this, but I wanted to work around it for now by pointing the LB at the Apache server, and having any port 80 requests redirect to 443. EDIT2: The other reason I'm interested in doing this is that since the LB can't do the redirect, I need another redirect mechanism in place to tell the browser to go to https://FOOBAR
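
    mod_rewrite can do what the pseudo-config above gestures at: %{HTTP_HOST} plays the role of the capture group, reusing whatever host the request came in on. A sketch:

        <VirtualHost *:80>
            RewriteEngine On
            # Send every port-80 request to the same host over HTTPS
            RewriteRule ^(.*)$ https://%{HTTP_HOST}$1 [R=301,L]
        </VirtualHost>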

    Read the article

  • Recover RAID 5 data after creating new array instead of re-using

    - by Brigadieren
    Folks please help - I am a newb with a major headache at hand (perfect storm situation). I have 3 1TB HDDs on my Ubuntu 11.04 box, configured as software RAID 5. The data had been copied weekly onto another, separate off-the-computer hard drive, until that failed completely and was thrown away. A few days back we had a power outage and after rebooting my box wouldn't mount the RAID. In my infinite wisdom I entered the mdadm --create -f... command instead of mdadm --assemble, and didn't notice the travesty that I had done until after. It started the array degraded and proceeded with building and syncing it, which took ~10 hours. After I was back I saw that the array is successfully up and running, but the RAID is not: I mean, the individual drives are partitioned (partition type f8) but the md0 device is not. Realizing in horror what I had done, I am trying to find some solutions. I just pray that --create didn't overwrite the entire content of the hard drives. Could someone PLEASE help me out with this - the data that's on the drives is very important and unique: ~10 years of photos, docs, etc. Is it possible that by specifying the participating hard drives in the wrong order I made mdadm overwrite them? When I do mdadm --examine --scan I get something like ARRAY /dev/md/0 metadata=1.2 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b name=<hostname>:0 Interestingly enough, the name used to be 'raid' and not the host name with :0 appended. Here are the 'sanitized' config entries: DEVICE /dev/sdf1 /dev/sde1 /dev/sdd1 CREATE owner=root group=disk mode=0660 auto=yes HOMEHOST <system> MAILADDR root ARRAY /dev/md0 metadata=1.2 name=tanserv:0 UUID=f1b4084a:720b5712:6d03b9e9:43afe51b Here is the output from mdstat: cat /proc/mdstat Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10] md0 : active raid5 sdd1[0] sdf1[3] sde1[1] 1953517568 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU] unused devices: <none> fdisk shows the following: fdisk -l Disk /dev/sda: 80.0 GB, 80026361856 bytes 255 heads, 63 sectors/track, 9729 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000bf62e Device Boot Start End Blocks Id System /dev/sda1 * 1 9443 75846656 83 Linux /dev/sda2 9443 9730 2301953 5 Extended /dev/sda5 9443 9730 2301952 82 Linux swap / Solaris Disk /dev/sdb: 750.2 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000de8dd Device Boot Start End Blocks Id System /dev/sdb1 1 91201 732572001 8e Linux LVM Disk /dev/sdc: 500.1 GB, 500107862016 bytes 255 heads, 63 sectors/track, 60801 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00056a17 Device Boot Start End Blocks Id System /dev/sdc1 1 60801 488384001 8e Linux LVM Disk /dev/sdd: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x000ca948 Device Boot Start End Blocks Id System /dev/sdd1 1 121601 976760001 fd Linux raid autodetect Disk /dev/dm-0: 1250.3 GB, 1250254913536 bytes 255 heads, 63 sectors/track, 152001 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x00000000 Disk /dev/dm-0 doesn't contain a valid partition table Disk /dev/sde: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0x93a66687 Device Boot Start End Blocks Id System /dev/sde1 1 121601 976760001 fd Linux raid autodetect Disk /dev/sdf: 1000.2 GB, 1000204886016 bytes 255 heads, 63 sectors/track, 121601 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 512 bytes / 512 bytes Disk identifier: 0xe6edc059 Device Boot Start End Blocks Id System /dev/sdf1 1 121601 976760001 fd Linux raid autodetect Disk /dev/md0: 2000.4 GB, 2000401989632 bytes 2 heads, 4 sectors/track, 488379392 cylinders Units = cylinders of 8 * 512 = 4096 bytes Sector size (logical/physical): 512 bytes / 512 bytes I/O size (minimum/optimal): 524288 bytes / 1048576 bytes Disk identifier: 0x00000000 Disk /dev/md0 doesn't contain a valid partition table Per suggestions I did clean up the superblocks and re-created the array with the --assume-clean option, but with no luck at all. Is there any tool that will help me to revive at least some of the data? Can someone tell me what mdadm --create does when it syncs, and how it destroys the data, so I can write a tool to undo whatever was done? After re-creating the raid I ran fsck.ext4 /dev/md0 and here is the output: root@tanserv:/etc/mdadm# fsck.ext4 /dev/md0 e2fsck 1.41.14 (22-Dec-2010) fsck.ext4: Superblock invalid, trying backup blocks... fsck.ext4: Bad magic number in super-block while trying to open /dev/md0 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 Per Shane's suggestion I tried root@tanserv:/home/mushegh# mkfs.ext4 -n /dev/md0 mke2fs 1.41.14 (22-Dec-2010) Filesystem label= OS type: Linux Block size=4096 (log=2) Fragment size=4096 (log=2) Stride=128 blocks, Stripe width=256 blocks 122101760 inodes, 488379392 blocks 24418969 blocks (5.00%) reserved for the super user First data block=0 Maximum filesystem blocks=0 14905 block groups 32768 blocks per group, 32768 fragments per group 8192 inodes per group Superblock backups stored on blocks: 32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848 and ran fsck.ext4 with every backup block, but all returned the following: root@tanserv:/home/mushegh# fsck.ext4 -b 214990848 /dev/md0 e2fsck 1.41.14 (22-Dec-2010) fsck.ext4: Invalid argument while trying to open /dev/md0 The superblock could not be read or does not describe a correct ext2 filesystem. If the device is valid and it really contains an ext2 filesystem (and not swap or ufs or something else), then the superblock is corrupt, and you might try running e2fsck with an alternate superblock: e2fsck -b 8193 <device> Any suggestions? Regards!

    Read the article

  • Server lags extremely and logs a flood of 'internal dummy connection' requests

    - by Dmitry
    Having a web server (I don't actually know who set it up; it's my heritage). A few hours ago it started working very (extremely!) slowly, and mysqld often fails requests. /var/log/mysqld.log is empty (well, it says mysqld started, and so on, but nothing regarding today). /var/log/apache2/access_log is full of such lines: ::1 - - [30/Nov/2011:10:15:05 +0100] "GET / HTTP/1.0" 200 1 "-" "Apache/2.2.3 (Linux/SUSE) (internal dummy connection)" ::1 - - [30/Nov/2011:10:15:05 +0100] "GET / HTTP/1.0" 200 1 "-" "Apache/2.2.3 (Linux/SUSE) (internal dummy connection)" ::1 - - [30/Nov/2011:10:15:05 +0100] "GET / HTTP/1.0" 200 1 "-" "Apache/2.2.3 (Linux/SUSE) (internal dummy connection)" ::1 - - [30/Nov/2011:10:15:05 +0100] "GET / HTTP/1.0" 200 1 "-" "Apache/2.2.3 (Linux/SUSE) (internal dummy connection)" ::1 - - [30/Nov/2011:10:15:05 +0100] "GET / HTTP/1.0" 200 1 "-" "Apache/2.2.3 (Linux/SUSE) (internal dummy connection)" ::1 - - [30/Nov/2011:10:15:05 +0100] "GET / HTTP/1.0" 200 1 "-" "Apache/2.2.3 (Linux/SUSE) (internal dummy connection)" ::1 - - [30/Nov/2011:10:15:05 +0100] "GET / HTTP/1.0" 200 1 "-" "Apache/2.2.3 (Linux/SUSE) (internal dummy connection)" ::1 - - [30/Nov/2011:10:15:05 +0100] "GET / HTTP/1.0" 200 1 "-" "Apache/2.2.3 (Linux/SUSE) (internal dummy connection)" ::1 - - [30/Nov/2011:10:15:05 +0100] "GET / HTTP/1.0" 200 1 "-" "Apache/2.2.3 (Linux/SUSE) (internal dummy connection)" ::1 - - [30/Nov/2011:10:15:05 +0100] "GET / HTTP/1.0" 200 1 "-" "Apache/2.2.3 (Linux/SUSE) (internal dummy connection)" ::1 - - [30/Nov/2011:10:15:05 +0100] "GET / HTTP/1.0" 200 1 "-" "Apache/2.2.3 (Linux/SUSE) (internal dummy connection)" ::1 - - [30/Nov/2011:10:15:05 +0100] "GET / HTTP/1.0" 200 1 "-" "Apache/2.2.3 (Linux/SUSE) (internal dummy connection)" ::1 - - [30/Nov/2011:10:15:05 +0100] "GET / HTTP/1.0" 200 1 "-" "Apache/2.2.3 (Linux/SUSE) (internal dummy connection)" Guys, what's that? How do I heal this? I read that internal dummy connections happen sometimes, but sending internal requests at a 1000/sec frequency isn't freaking normal! How do I find out the reason for this?
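
    The dummy connections themselves are just Apache waking up its own children; a flood of them usually means children are being killed and respawned rapidly, so the slowdown is worth chasing separately (memory pressure, MaxClients, MaxRequestsPerChild). Independently of the root cause, they can at least be kept out of the access log; a sketch using mod_setenvif:

        # Tag loopback requests and skip them in the access log
        SetEnvIf Remote_Addr "127\.0\.0\.1" loopback
        SetEnvIf Remote_Addr "::1" loopback
        CustomLog /var/log/apache2/access_log combined env=!loopback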

    Read the article

  • Webservice randomly dropping connections - possibly due to firewall 'nondata event' messages?

    - by adam
    I have a hosted webapp which requests data from a REST webservice in our office. Each page calls one (or several) webservices, which go from our host, via our firewall (a WatchGuard Firebox), to a server in our office. All of a sudden, the app has dramatically slowed. We have determined that the webservice is timing out at random when called externally (it's fine when called within the office network). I'm pretty certain it's our connection which is dropping the webservice call, so I've written a quick PHP/curl script which calls the webservice over many iterations and shows the various timings. Below is an example output, showing both a failed and a successful call (with a 5-second timeout): http_code namelookup_time connect_time pretransfer_time starttransfer_time total_time 1 0 0.000096 0.0342 0.0000 0.0000 0.0342 2 200 0.000052 0.0332 0.1327 0.1751 0.1752 As per iteration #1 above, failed requests seem to be failing between connect and pretransfer. I'm not sure if this shows that the connection successfully passed the firewall, or could the firewall still cause an issue? Our firewall is showing a series of 'nondata event' log messages for the relevant access rule. Our IT team tells me these are routine, although I can find no mention of them in Google. I'm not sure if this fits in between connect and pretransfer. Having eliminated the webservice server (by testing internally) and the live webapp (by testing different code on different external servers), I am left suspecting the connection to the office. Could the Firebox nondata events be causing a problem between connect and pretransfer?
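
    For reference, per-phase timings like the table above can be collected with curl alone, which is presumably close to what the PHP script does (the URL is a placeholder):

        # One row per attempt: status code plus the timing phases, 5s cap
        curl -s -o /dev/null --max-time 5 \
             -w "%{http_code} %{time_namelookup} %{time_connect} %{time_pretransfer} %{time_starttransfer} %{time_total}\n" \
             http://webservice.example.com/endpoint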

    Read the article

  • RRAS Public Address Pool on Windows Server 2008

    - by Art
    I have a Windows 2008 server with two NICs, running RRAS and a small public website. It also does NAT for several other PCs on my network, and everything works great. I have a block of 5 public static IPs from my ISP, one of which is bound to the public NIC in the Windows 2008 server. I would like to assign one of the remaining 4 public IPs to a machine on my private network. I thought I could do this by going into RRAS, selecting NAT under IPv4, and then adding the public IP address to the address pool and specifying a reservation for the machine I would like to use that address, by adding its private IP address. When I do this, the machine I reserved the public IP address for seems to lose all outside network connectivity. I can still ping other PCs on my 192.168.0.* net, but anything outside is no longer reachable. When I remove the reservation, everything seems to work. After setting the reservation and right-clicking on the external public interface and selecting 'Show Mappings', I can see outbound requests from my private address with the desired associated public address; however, I do not see any inbound requests. What am I doing wrong/missing?

    Read the article

  • Port forwarding on D-Link DIR-615 super-slow, useless

    - by Jaroslav Záruba
    Hello. I have replaced my old router with a DIR-615 from D-Link, and now the port forwarding is so slow it makes the router practically useless for requests coming from outside my network. Accessing the router itself (admin UI) from outside is without any issues, no delay whatsoever. But when I try to access a service residing on any of the computers in my network from outside, the requests take minutes and minutes. (E.g. I can see the source of my GWT app's main page, but loading additional CSS and JS files takes years.) If anyone could recommend any further diagnostics I should do to figure out what is happening, it would be great. A few notes: it happens with multiple services (web app on Tomcat, viewing a directory index via Apache); it does not make a difference whether the service is hosted on a wired or wireless PC; accessing the service on localhost works fine, as does any 'inner' communication; turning off the firewall on the target PC does not make a difference either (makes sense); when I replace this router with the old one (both 192.168.1.1) everything works fine; I see nothing suspicious in the router's log; I believe I have the latest firmware (4.11); the DIR-615 sucks, it already died once completely. Regards, Jarda Z.

    Read the article

  • Apache configuration with virtual hosts and SSL on a local network

    - by Petah
    I'm trying to set up my local Apache configuration like so: http://localhost/ should serve ~/ http://development.somedomain.co.nz/ should serve ~/sites/development.somedomain.co.nz/ https://development.assldomain.co.nz/ should serve ~/sites/development.assldomain.co.nz/ I only want to allow connections from our local network (192.168.1.* range) and myself (127.0.0.1). I have set up my hosts file with: 127.0.0.1 localhost 255.255.255.255 broadcasthost ::1 localhost fe80::1%lo0 localhost 127.0.0.1 development.somedomain.co.nz 127.0.0.1 development.assldomain.co.nz 127.0.0.1 development.anunuseddomain.co.nz My Apache configuration looks like: Listen 80 NameVirtualHost *:80 <VirtualHost development.somedomain.co.nz:80> ServerName development.somedomain.co.nz DocumentRoot "~/sites/development.somedomain.co.nz" DirectoryIndex index.php <Directory ~/sites/development.somedomain.co.nz> Options Indexes FollowSymLinks ExecCGI Includes AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> <VirtualHost localhost:80> DocumentRoot "~/" ServerName localhost <Directory "~/"> Options Indexes FollowSymLinks ExecCGI Includes AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> <IfModule mod_ssl.c> Listen *:443 NameVirtualHost *:443 AcceptMutex flock <VirtualHost development.assldomain.co.nz:443> ServerName development.assldomain.co.nz DocumentRoot "~/sites/development.assldomain.co.nz" DirectoryIndex index.php SSLEngine on SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL SSLCertificateFile /Applications/XAMPP/etc/ssl.crt/server.crt SSLCertificateKeyFile /Applications/XAMPP/etc/ssl.key/server.key BrowserMatch ".*MSIE.*" \ nokeepalive ssl-unclean-shutdown \ downgrade-1.0 force-response-1.0 <Directory ~/sites/development.assldomain.co.nz> SSLRequireSSL Options Indexes FollowSymLinks ExecCGI Includes AllowOverride All Order allow,deny Allow from all </Directory> </VirtualHost> </IfModule> http://development.somedomain.co.nz/, http://localhost/ and https://development.assldomain.co.nz/ work fine. The problem is that when I request http://development.anunuseddomain.co.nz/ or http://development.assldomain.co.nz/ it responds with the same content as http://development.somedomain.co.nz/. I want it to deny all requests that do not match a virtual host server name, and all requests to an HTTPS host that are made over plain HTTP. PS: I'm running XAMPP on Mac OS X 10.5.8
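
    With name-based hosting, Apache falls back to the first matching <VirtualHost> for any Host header it doesn't recognise, which is why the unused domains land on development.somedomain.co.nz. One sketch of a fix: put a catch-all vhost first that refuses to serve anything, so only exact server names get content:

        # Must come before the real vhosts so it becomes the default
        <VirtualHost *:80>
            ServerName catchall.invalid
            <Location />
                Order allow,deny
                Deny from all
            </Location>
        </VirtualHost>

    The same trick works on port 443; plain-HTTP requests for the SSL-only name then hit the port-80 catch-all instead of the somedomain vhost.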

    Read the article

  • eXist-db: can't start webstart client on a closed port, reverse proxied via Apache

    - by rvdb
    I am configuring an Apache HTTP server so it reverse proxies requests starting with /apps/ to an eXist-db instance running in a Tomcat server on port 8082. This port has been closed in the firewall and is inaccessible to the outside world. Following the eXist documentation, I have the following rules in place in my httpd.conf file: ProxyPass /apps/ http://localhost:8082/ ProxyPassReverse /apps/ http://localhost:8082/ ProxyPassReverseCookiePath /apps/ / All goes well for requests to e.g. 'http://mydomain/apps/exist/index.xml'. Yet the webstart client (accessible at 'http://localhost:8082/exist/webstart/exist.jnlp' on the web server) doesn't work behind the proxy. While 'http://mydomain/apps/exist/webstart/exist.jnlp' does generate a valid exist.jnlp file, that file can't be executed. The reason seems quite obvious: apparently, the eXist-db instance generating the exist.jnlp file only sees the proxied request as 'http://localhost:8082/exist/webstart/exist.jnlp'. Yet, since the exist.jnlp file is executed on the client, that reference is meaningless (unless the client computer happens to have an eXist-db instance running on that port). Executing the exist.jnlp file hence fails with a 'connection refused' error. Yet there's no problem at all connecting a local eXist-db Java client to the proxied eXist instance with the URL xmldb:exist://mydomain/apps/exist/xmlrpc. The problem lies in generating the webstart exist.jnlp file, which seems to need access to a publicly accessible URL. However, opening port 8082 and replacing the proxy references to 'http://localhost:8082' with 'http://mydomain:8082' IMO rather defeats the point of reverse proxying. Have others had success reverse proxying eXist-db on a closed port behind Apache? Are there perhaps some proxy configuration settings I have overlooked (I'm no expert at all) that can make eXist see the original request instead of the proxied one? Kind regards, Ron
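
    One setting worth trying before opening the port: ProxyPreserveHost, which makes Apache pass the client's original Host header to the backend instead of localhost:8082, so the JNLP generator at least sees the public host name. A sketch (the /apps/ path prefix may still need separate handling, e.g. via rewriting or an eXist base-URI setting):

        ProxyPreserveHost On
        ProxyPass        /apps/ http://localhost:8082/
        ProxyPassReverse /apps/ http://localhost:8082/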

    Read the article

  • mod_rewrite REQUEST_FILENAME doesn't contain absolute path

    - by Paul Dixon
    I have a problem with a file test operation in a mod_rewrite RewriteCond entry which tests whether %{REQUEST_FILENAME} exists. It seems that rather than %{REQUEST_FILENAME} being an absolute path, I'm getting a path which is rooted at the DocumentRoot instead. Configuration: I have this inside a <VirtualHost> block in my Apache 2.2.9 configuration: RewriteEngine on RewriteLog /tmp/rewrite.log RewriteLogLevel 5 #push virtually everything through our dispatcher script RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L] Diagnostics attempted: That rule is a common enough idiom for routing requests for non-existent files or directories through a script. Trouble is, it's firing even if a file does exist. If I remove the rule, I can request normal files just fine. But with the rule in place, these requests get directed to dispatch.php. Rewrite log trace: Here's what I see in rewrite.log: init rewrite engine with requested uri /test.txt applying pattern '^/([^/]*)/?([^/]*)' to uri '/test.txt' RewriteCond: input='/test.txt' pattern='!-f' => matched RewriteCond: input='/test.txt' pattern='!-d' => matched rewrite '/test.txt' -> '/dispatch.php?_c=test.txt&_m=' split uri=/dispatch.php?_c=test.txt&_m= -> uri=/dispatch.php, args=_c=test.txt&_m= local path result: /dispatch.php prefixed with document_root to /path/to/my/public_html/dispatch.php go-ahead with /path/to/my/public_html/dispatch.php [OK] So it looks to me like REQUEST_FILENAME is being presented as a path from the document root, rather than the filesystem root, which is presumably why the file test operator fails. Any pointers for resolving this gratefully received...
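
    A common workaround for exactly this symptom: in VirtualHost (per-server) context the rewrite rules run before the URL has been mapped to the filesystem, so %{REQUEST_FILENAME} still holds the URL path. Building the path by hand avoids that; a sketch:

        RewriteEngine on
        # Prefix with the document root ourselves, then file-test the result
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-d
        RewriteRule ^/([^/]*)/?([^/]*) /dispatch.php?_c=$1&_m=$2 [qsa,L]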

    Read the article
