Search Results

  • How many sites can IIS 6 handle?

    - by Sarah Nasir
    Is there a limit to the number of sites you can create in IIS? I have searched, and some forum discussions say there is no limit. Someone mentioned having created up to 100,000 sites in IIS 6, though I don't know his server specs. Personally, I suspect that whatever IIS's limit is, server resources will run out well before it is reached. How do big services like Blogger and WordPress handle a huge number of sites on their servers? Questions:
    1) Is there an upper limit for IIS 6.0? If yes, what is it?
    2) How many requests should IIS serve for a decent server? (I am not talking about dynamic requests on the server, or logs.)
    3) Is there a way I can do a test run on my cloud to test my server's capacity? What factors should I keep in view: DB requests, page size, disk reads/writes, etc.?
    A response would be highly appreciated.

    Read the article

  • Can a pool of memcache daemons be used to share sessions more efficiently?

    - by Tom
    We are moving from a one-webserver setup to a two-webserver setup, and I need to start sharing PHP sessions between the two load-balanced machines. We already have memcached installed (and started), so I was pleasantly surprised that I could accomplish sharing sessions between the new servers by changing only three lines in the php.ini file (the session.save_handler and session.save_path). I replaced:
      session.save_handler = files
    with:
      session.save_handler = memcache
    Then on the master webserver I set session.save_path to point to localhost:
      session.save_path="tcp://localhost:11211"
    and on the slave webserver I set session.save_path to point to the master:
      session.save_path="tcp://192.168.0.1:11211"
    Job done. I tested it and it works. But... obviously using memcache means the sessions are in RAM and will be lost if a machine is rebooted or the memcache daemon crashes. I'm a little concerned by this, but I am more worried about the network traffic between the two webservers (especially as we scale up), because whenever someone is load-balanced to the slave webserver, their session is fetched across the network from the master. I was wondering if I could define two save paths, so the machines look in their own session storage before using the network. For example:
      Master: session.save_path="tcp://localhost:11211, tcp://192.168.0.2:11211"
      Slave: session.save_path="tcp://localhost:11211, tcp://192.168.0.1:11211"
    Would this successfully share sessions across the servers AND help performance, i.e. save network traffic 50% of the time? Or is this technique only for failover (e.g. when one memcache daemon is unreachable)? Note: I'm not really asking specifically about memcache replication; it's more about whether the PHP memcache client can peek inside each memcache daemon in a pool, return a session if it finds one, and only create a new session if it doesn't find one in any of the stores. As I'm writing this I'm thinking I'm asking a bit much from PHP... Assume: no sticky sessions, round-robin load balancing, LAMP servers.
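
    For reference, the pool-style configuration would look something like the sketch below in php.ini. One caveat, hedged since it depends on the memcache extension version: as far as I know, the PHP memcache client hashes each session ID to one server in the pool rather than preferring the local daemon, so both machines should list the pool in the same order, and the hoped-for "check localhost first" saving may not materialize:
      session.save_handler = memcache
      ; same pool, same order, on both webservers
      session.save_path = "tcp://192.168.0.1:11211, tcp://192.168.0.2:11211"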

    Read the article

  • SATA Driver for Acer Aspire One D257

    - by Robert Niestroj
    I have an Acer Aspire One D257. The hard disk in this netbook is defective, so I bought a new one. Now I want to reinstall Windows 7. I'm using an external DVD drive plugged into USB. The Windows 7 DVD starts, the Win7 setup starts, and when it comes to the hard drive options it says that no drive was detected and I should try searching for drivers. It shows me this window: Screenshot from web. Now I can't find the right drivers for this netbook to continue with the installation. The laptop has the newest BIOS (1.15) and is reset to factory default settings, except that I enabled the boot menu prompt with F12. From the Acer support website I downloaded the SATA AHCI driver and the chipset driver, and unpacked both to a USB flash drive in separate folders. When I select the SATA AHCI driver, it does not find any drivers. When I uncheck the checkbox "Hide drivers that are not compatible with hardware on this computer", it shows one driver: Acer HWID (path_to\1.inf). When I continue with this driver I get an error message that says something like: "No new devices found. Check if the driver files are on the installation disk." When I point it at the chipset driver it sees a lot more drivers. With the same checkbox unchecked it shows, among others:
      Intel N10 Family DMI Bridge
      Intel N10/ICH7 Family PCI Express Root Port
      Intel N10/ICH7 SMBUS Controller
      Intel N10/ICH7 Family USB Universal Host Controller
      Intel N10/ICH7 Family USB2 Enhanced Host Controller
      Intel N10/ICH7 Family Interface LPC Controller
    It also lists some SATA drivers, but they do not work either; I get the same error message as before. Can someone help me find a driver that should work, or am I doing something else wrong?

    Read the article

  • Hudson deploy specific git revision

    - by brad
    I'm using Hudson to auto-deploy my Rails app to Heroku. In my main build job I pull the master branch from a Git repo (hosted using gitosis on the same machine) with the following settings:
      URL of repository: /home/git/repositories/my_app.git
      Name of repository: origin
      Refspec: +refs/heads/master:refs/remotes/origin/master
      Branches to build: master
    Then, assuming all tests pass, I want to kick off a new build that is the deploy to Heroku. I can't, however, figure out how to get that deploy build to check out the particular revision this build was using. I understand there's a parameterized trigger plugin that would allow me to pass this revision number, but I don't know how to tell Hudson to check out that particular revision in the deploy build. I'm pretty sure this just comes down to my limited knowledge of git, but where in Hudson's git configuration is there an option to check out a particular revision? Otherwise, many commits could happen while a build is running, and when it kicks off a deploy build, that deploy build would just check out the HEAD of the branch, which may not be the same as the code that was pushed to trigger this build. I don't fully understand why I have a refspec in Hudson and then also specify a branch to build; I thought these were the same thing. Can a refspec somehow specify the revision number? How would this be referenced if it was passed through with the parameterized trigger plugin? (I've never used that plugin, but someone else recommended it as a way to pass vars to a new build; if there's another way, I'm all ears.)
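
    For what it's worth, a sketch of the parameterized-trigger route, hedged: it assumes the git plugin exposes the built commit as the GIT_COMMIT environment variable (worth verifying on your plugin version), and REVISION is a hypothetical parameter name. The build job passes REVISION=$GIT_COMMIT to the deploy job, whose shell build step then pins the checkout:
      # deploy job: REVISION arrives from the parameterized trigger plugin
      git checkout $REVISION
      git push git@heroku.com:my_app.git $REVISION:master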

    Read the article

  • Understanding encryption keys

    - by claws
    Hello, I'm really embarrassed to ask this question, but the fact is that I don't know anything about encryption; I've always avoided it. I don't understand the concept of encryption keys (public key, private key, RSA key, DSA key, PGP key, SSH key and what not). I encounter these on a regular basis but, as I said, I've always avoided them. Here are a few instances where I've run into them:
    Creating an account: "A public RSA or DSA key will be needed for an account. Send the key along with your desired account name to [email protected]." I really don't know what RSA/DSA are, or how to get their keys. Do I need to register somewhere for that?
    Mailing: I can't recall exactly, but I've seen mails with attachments like a signature, or a mail footer with something called a PGP signature. I really don't get the concept.
    Git version control: I created an account on assembla.com (for a private Git repo) and it asked me to enter "SSH keys" in my profile. Where am I going to get these? Why do I need them? Isn't SSH related to remote login (like remote desktop or telnet)? How are these two SSHs related, and how do they differ?
    I don't know how many more situations I'm going to encounter these things in. I'm really confused and have no clue where to start or how to proceed. Could someone kindly point me in the right direction? Note: I have absolutely zero interest in encryption-related topics, so there is no way I'm going to read a graduate-level book on the subject. I just want to clear up my concepts without going into much depth.
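
    As a concrete starting point for the Git/SSH case: a keypair is something you generate locally, not something you register for. A minimal sketch on Linux or Mac, using the OpenSSH defaults:
      # creates ~/.ssh/id_rsa (private, keep secret) and ~/.ssh/id_rsa.pub (public, paste into the website)
      ssh-keygen -t rsa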

    Read the article

  • Sendmail not working in CentOS 6.4

    - by Kane
    I am trying to send e-mails from my CentOS 6.4 box, but it does not work. My knowledge of servers is quite limited, so I hope someone can help me. Here is what I did. First I tried to send an email using the mail command, but it was not on the OS, so I installed it:
      # yum install mailx
    After that I tried sending an email using the mail command, but it did not send anything. I checked on the internet and realized I needed a mail server like sendmail, so I installed it:
      # yum install sendmail sendmail-cf sendmail-doc sendmail-devel
    After that, I configured it following some tutorials. First, the sendmail.mc file:
      # vi /etc/mail/sendmail.mc
    I commented out the following line:
      BEFORE: DAEMON_OPTIONS('Port=smtp, Name=MTA') dnl
      AFTER:  dnl DAEMON_OPTIONS('Port=smtp, Name=MTA') dnl
    and checked that the following lines are correct:
      FEATURE(`virtusertable', `hash -o /etc/mail/virtusertable.db')dnl
      ...
      FEATURE(use_cw_file)dnl
      ...
      FEATURE(`access_db', `hash -T<TMPF> -o /etc/mail/access.db')dnl
    Then I updated sendmail.cf:
      # m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
    opened port 25 by adding the proper line to the iptables file:
      # vi /etc/sysconfig/iptables
      -A INPUT -m state --state NEW -m tcp --dport 25 -j ACCEPT
    and restarted iptables and sendmail:
      # service iptables restart
      # service sendmail restart
    So I thought that would be it, but when I tried:
      # mail '[email protected]'
      Subject: test subject
      test content
      .
    and checked the mail log:
      # vi /var/log/maillog
    this is what I found:
      Aug 14 17:36:24 dev-admin-test sendmail[20682]: r7D8RItS019578: to=<[email protected]>, ctladdr=<[email protected]> (0/0), delay=1+00:09:06, xdelay=00:00:00, mailer=esmtp, pri=2460500, relay=alt4.gmail-smtp-in.l.google.com., dsn=4.0.0, stat=Deferred: Connection timed out with alt4.gmail-smtp-in.l.google.com.
    I do not understand why there is a connection timeout. Am I missing something? Can anyone help me, please? Thank you.
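
    One quick check, offered as a guess: many ISPs and cloud providers block outbound port 25 entirely, which produces exactly this kind of deferred timeout no matter how sendmail is configured. Testing the outbound connection by hand will tell you:
      # if this hangs instead of printing a 220 banner, outbound port 25 is blocked upstream
      telnet alt4.gmail-smtp-in.l.google.com 25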

    Read the article

  • How do I reinitialise a failed RAID 5 drive using the terminal on Ubuntu Server

    - by Stephen
    I've recently put together a new system, and part of that has been creating a software RAID 5 using mdadm on Ubuntu Server. I successfully got to the point where I create the array using:
      sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    I left it to do its thing overnight, then used the following command to check on it:
      watch cat /proc/mdstat
    which returned:
      Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
      md0 : active raid5 sdd1[4](S) sdc1[2] sdb1[1] sda1[0](F)
            5860535808 blocks super 1.2 level 5, 512k chunk, algorithm 2 [4/2] [_UU_]
      unused devices: <none>
    It appears that one drive has failed (and I'm not too savvy about why another is a spare). So, just to be sure that something else isn't amiss, I wanted to try to re-engage the failed drive. Can someone explain how I can do that, and what I should do with the spare (if anything)? And how do I know when synchronisation is complete? The tutorial I used to get this far is located here: http://sonniesedge.co.uk/2009/06/13/software-raid-5-on-ubuntu-904/ Many thanks!
    p.s. Here is some extra information that may help:
      sudo mdadm --detail /dev/md0
      /dev/md0:
              Version : 1.2
        Creation Time : Mon Jun 18 21:14:21 2012
           Raid Level : raid5
           Array Size : 5860535808 (5589.04 GiB 6001.19 GB)
        Used Dev Size : 1953511936 (1863.01 GiB 2000.40 GB)
         Raid Devices : 4
        Total Devices : 4
          Persistence : Superblock is persistent
          Update Time : Mon Jun 18 21:50:26 2012
                State : clean, FAILED
       Active Devices : 2
      Working Devices : 3
       Failed Devices : 1
        Spare Devices : 1
               Layout : left-symmetric
           Chunk Size : 512K
                 Name : myraidbox:0 (local to host myraidbox)
                 UUID : a269ee94:a161600c:fb1665e7:bd2f27b3
               Events : 13
          Number   Major   Minor   RaidDevice   State
             0       0        0        0        removed
             1       8       17        1        active sync /dev/sdb1
             2       8       33        2        active sync /dev/sdc1
             3       0        0        3        removed
             0       8        1        -        faulty spare /dev/sda1
             4       8       49        -        spare /dev/sdd1
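
    In case it helps frame the question: the usual cycle for re-engaging a kicked member with mdadm looks something like the sketch below (assuming /dev/sda1 is actually healthy; if it keeps getting failed out, replace it instead), and resync progress shows up in /proc/mdstat:
      # drop the faulty member, then add it back so the array rebuilds onto it
      sudo mdadm /dev/md0 --remove /dev/sda1
      sudo mdadm /dev/md0 --add /dev/sda1
      cat /proc/mdstat    # shows a recovery progress bar until the rebuild is done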

    Read the article

  • MacBook Pro and Backtrack 5R1 Configuration

    - by user119346
    I have a MacBook Pro quad core (2.2 GHz / 8 GB RAM / 750 GB HDD). I have gone through tons of forums on the internet, but none of them seem to be up to date for the current BackTrack 5 R1, or to answer the question of getting it to work correctly on the MBP. Can anyone help? I don't have a USB dongle, and I want to be able to use the MBP's internal AirPort Extreme wireless with BT 5R1. I have downloaded BackTrack 5 R1 onto VMware Fusion and got it up and running, but to no avail: it keeps recognizing my card as an Ethernet connection, and KisMAC won't recognize the card either. So what I am asking for is this:
    1. The proper download method for BackTrack 5 R1 on my MacBook Pro (YES, I am willing to re-download BT 5R1).
    2. The complete process from start to finish, up to date, from someone who has done this using an MBP running OS X Lion.
    3. The proper tweaks, settings, or commands to get my AirPort Extreme wireless card to work (I think it is a Broadcom 4331).
    4. The wireless connection I need to use the tools on both BackTrack 5 R1 and KisMAC. I mainly need to test WEP cracking on my own network for security.
    5. The difference, if any, between running BT 5R1 on VMware Fusion and running it directly on the MBP, and how to install it directly on the MBP.

    Read the article

  • SQL Server becomes slow after restart

    - by Tobi DM
    I already posted this one on Stack Overflow, but someone gave me the hint that I might have more luck on Server Fault. We use SQL Server 2005 on Windows Server 2008. The server has 48 GB RAM, and SQL Server is configured to use 40 GB of it. There is only one database hosted (about 70 GB). The only app beside SQL Server is our app server, which connects the clients to the database. Now we encounter the following problem: after a restart of the server, performance is great. The server grabs the 40 GB RAM it is allowed and then runs fast as hell. But after about four weeks the system becomes slower and slower, and the execution time of statements (seen in the profiler) rises slowly. I cannot see anything going wrong on the server:
    CPU usage is at about 20%.
    I/O also seems to be no problem.
    Process Monitor does not show strange apps or anything like that.
    The event log has no interesting messages.
    There are no open transactions or blocking to see.
    We do not use cursors in our app.
    We have already tried the following things, without effect:
    Dropped the caches using the statements DBCC FREEPROCCACHE, DBCC FREESYSTEMCACHE('ALL') and DBCC DROPCLEANBUFFERS.
    Restarted the app server we are using.
    Restarted the SQL Server service.
    Nothing helped except restarting the whole server. Any ideas?
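
    A hedged diagnostic idea when nothing obvious shows up: compare the server's top wait types in the fast state and in the slow state. On SQL Server 2005 something like this works:
      -- top waits since the counters were last cleared; run when fast, clear, run again when slow
      SELECT TOP 10 wait_type, wait_time_ms, waiting_tasks_count
      FROM sys.dm_os_wait_stats
      ORDER BY wait_time_ms DESC;
      -- DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR) resets the counters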

    Read the article

  • My URL has been identified as a phishing site

    - by user2118559
    Some months ago I ordered a VPS at RamNode and, following a tutorial (ZPanelCP on CentOS 6.4, http://www.zvps.co.uk/zpanelcp/centos-6), installed CentOS and ZPanel. Today I received this email:
      "We are requesting that you secure and investigate the phishing website identified below. This URL has been identified as a phishing site and is currently involved in identity theft activities.
      URL: hxxp://111.11.111.111/www.connet-itunes.fr/iTunesConnect.woasp/ (IP modified, not real)
      This site is being used to display false or spoofed content in an apparent effort to steal personal and financial information. This matter is URGENT. We believe that individuals are being falsely directed to this page and may be persuaded into divulging personal information to a criminal, if the content is not immediately disabled."
    I am trying to understand this. Some hacker hacked the VPS and placed some file(s) with content that redirects to www.connet-itunes.fr/iTunesConnect.woasp? Then my questions:
    1) How can I find the file? Where might it be located? The URL is hxxp://111.11.111.111/ (an IP address, not a domain name).
    2) What should I do to protect the VPS (with CentOS)? Any tutorial? Where might the security problem be? I mean, maybe someone has faced something similar...
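
    A hedged starting point for question 1 (the paths are guesses; the document root varies by ZPanel install): search the filesystem for the directory named in the takedown notice, and list recently modified files under the web root:
      # look for the phishing path itself
      find / -type d -name 'www.connet-itunes.fr' 2>/dev/null
      # and for anything changed recently under the web root (adjust the path to your setup)
      find /var/www -type f -mtime -14 -ls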

    Read the article

  • postfix takes 60-90ms to queue email -- normal?

    - by Jeff Atwood
    We're seeing some (maybe?) strange delays when submitting individual emails to our local Postfix server. To help diagnose the issue, I wrote a little test program which sends 5 emails:
      get smtp    1ms (  1 ms)
      email 0   677ms (676 ms)
      email 1   802ms (125 ms)
      email 2   890ms ( 88 ms)
      email 3   973ms ( 83 ms)
      email 4  1088ms (115 ms)
    Discounting the handshaking in the first email, that's about 90 ms per email. These timings have also been corroborated by another test app, written by someone else using a different codepath, so it appears to be server-related. I turned on detailed logging, and I can see that the delay is between the end-of-message \r\n.\r\n and the response:
      [16:31:29.95] [SEND] \r\n.\r\n
      [16:31:30.05] [RECV] 250 2.0.0 Ok: queued as B128E1E063\r\n
      [16:31:30.08] [SEND] \r\n.\r\n
      [16:31:30.17] [RECV] 250 2.0.0 Ok: queued as 4A7DE1E06E\r\n
      [16:31:30.19] [SEND] \r\n.\r\n
      [16:31:30.27] [RECV] 250 2.0.0 Ok: queued as 68ACC1E072\r\n
      [16:31:30.28] [SEND] \r\n.\r\n
      [16:31:30.34] [RECV] 250 2.0.0 Ok: queued as 7EFFE1E079\r\n
      [16:31:30.39] [SEND] \r\n.\r\n
      [16:31:30.45] [RECV] 250 2.0.0 Ok: queued as 9793C1E07A\r\n
    The time intervals tell the story (discounting the handshaking required for the initial email): each email waits about 60-90 milliseconds for Postfix to queue it! This seems excessive to me. Is it "normal" for Postfix to take 60-90 ms for every email you send it, or do I just have unreasonable expectations? I would expect the local Postfix server to queue an email in about 20 ms, tops!
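
    One hedged hypothesis worth testing: Postfix syncs each queued message to disk before acknowledging it, so the per-message floor is roughly your disk's synchronous-write latency. A rough way to measure that (writes a small scratch file; point it at whatever disk holds the queue):
      # each 4k write is forced to disk, so total time / 100 approximates one sync write
      dd if=/dev/zero of=/var/spool/postfix/synctest bs=4k count=100 oflag=dsync
      rm /var/spool/postfix/synctest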

    Read the article

  • Automated Linux VMs on Hyper-V 2012

    - by Mick
    I have a requirement to create a ton of Linux VMs for our customers (we run managed infrastructure) on Hyper-V 2012 in the coming months, and I have an issue with automating it. Here is how I need it to work:
    1. A user accesses their web page and creates a VM.
    2. The VM is created with a unique IP and name.
    3. The user logs in over SSH.
    I know Hyper-V quite well, can work with PowerShell, and am a C# programmer, so the development side of things is taken care of. I also know enough about Linux to be at least competent: I have used it on and off for a number of years, but not done anything enterprise-level with it. All this can be done easily by manual processes, but I need to be able to script or program it to automate it, as there could be hundreds of VMs being created, and I don't know how. My first thought is to have a database of pre-generated names and IPs, but I don't know how to get a Linux VM to boot up and grab one from the database... I suppose a kickstart script would take care of it, but I don't know what to do from there. Here is what is bouncing around in my head:
    Create a standard Linux build. (Easy to do.)
    Someone clicks "Create VM"; I pull a name and IP from the database and write them into a kickstart script. (Easy to do.)
    I could then open the template VHDX file, copy in the script, and save it. (Not sure if possible.)
    The user boots up the new VM, and the kickstart script gives it the name and IP I assigned.
    My problem is that I don't know how to open a VHDX file and insert a kickstart script into it; I can't figure it out. I may be reaching here, and this solution may be miles off... I am more used to creating Windows VMs with scripts and so on, which I am more familiar with. Any help would be appreciated. Thanks, Mick
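
    One hedged alternative to writing into the VHDX directly (Windows cannot easily modify the ext filesystem inside a Linux template): generate a tiny per-VM ISO holding the kickstart answers and attach it to the VM, with the template's firstboot script looking for that volume. A PowerShell sketch, assuming oscdimg.exe from the Windows ADK and hypothetical names/paths throughout:
      # build an ISO containing the ks.cfg generated from the database
      & oscdimg.exe -n C:\ks\vm42 C:\ks\vm42.iso
      # create the VM from a copy of the template disk and attach the ISO
      New-VM -Name "vm42" -MemoryStartupBytes 1GB -VHDPath "C:\VMs\vm42.vhdx" -SwitchName "External"
      Add-VMDvdDrive -VMName "vm42" -Path "C:\ks\vm42.iso"
      Start-VM -Name "vm42"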

    Read the article

  • TCP server memory management: #connections vs. #requests

    - by Andrew
    Given that there is no theoretical limit to the number of concurrent TCP connections a Windows 2008 server can handle, the only cost is that each connection consumes memory on the server. Unfortunately, memory is not unlimited (and I want to use only physical memory). Say we have 2 GB of server memory. Now there are two extreme cases:
    Case 1: If we allocate a 64 KB buffer for each connection (only to receive incoming requests), then 32,768 connections can consume all 2 GB of memory. That leaves no memory to queue/process incoming requests from those connections.
    Case 2: On the other hand, say a single connection (or very few) continuously keeps sending request buffers (for example, video streaming from one connection to another) and the server cannot process them in time. Those buffers pile up on the server, eventually occupy most of its memory, and leave no memory for any new connection thereafter.
    This is the real dilemma in server design that has been bugging me badly for many days. If I can decide on a maximum request-buffer size per connection and a maximum number of queued requests per connection, then, based on available server memory, a limit on the maximum number of concurrent connections follows automatically. How do I decide on these limits to achieve the best performance and throughput? I am just looking for perfect utilization of server resources. Are there any standard guidelines, or empirical data that someone could share with me, please?
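
    To make the trade-off concrete: once both per-connection numbers are fixed, the connection budget in Case 1 is just memory divided by (receive buffer + queue allowance). A sketch of the arithmetic, where the 16 KB queue allowance is an illustrative assumption rather than a recommendation:
      # 2 GB budget, 64 KB receive buffer + 16 KB queued-request allowance per connection
      echo $(( 2 * 1024 * 1024 / (64 + 16) ))    # => 26214 concurrent connections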

    Read the article

  • What could cause random files to be downloaded without permission?

    - by Dustin
    I have been having issues lately with a certain directory. It seems someone, or something, is placing files in it, and any attempt to delete them is successful, HOWEVER they reappear over time (maybe not the exact same ones, but random files). I will provide the information I can and several pictures of my problem:
    sandbox.mys4l.com/visual/files/b1.jpg - Files like this have been appearing in my /visual/ folder, and I have no clue where they are coming from.
    sandbox.mys4l.com/visual/files/b2.jpg - This is what is inside one of those weird files; it appears to be nothing problematic.
    sandbox.mys4l.com/visual/files/b4.jpg - As you can see, in the time it took me to take the first picture, more odd files showed up. These log files are also being uploaded to this directory, and I know I didn't put them there.
    sandbox.mys4l.com/visual/files/b7.jpg - This is inside one of those mysterious .log files; I'm not sure what it's all about.
    These files only appear in this specific directory, and I'm not sure of their origin, only that they will not go away. I have done a full system scan at least twice with an up-to-date virus scanner, and have looked for an unknown script which may be writing them there. Nothing has come up, so I come to you guys, as I hear this is the best place to find answers. I hope this problem has a solution!

    Read the article

  • Shrinking a large transaction log on a full drive

    - by Sam
    Someone fired off an update statement as part of some maintenance which did a cross-join update on two tables with 200,000 records each. That's 40 billion row combinations, which would explain part of how the log grew to 200 GB. I also did not have the log file capped, which is another problem I will be taking care of server-wide, as we have almost 200 databases residing here. The 'solution' I used was to back up the database, back up the log with TRUNCATE_ONLY, and then back up the database again. I then shrank the log file and set a cap on the log. Seeing as other databases were using the log drive, I was in a bit of a rush to clean it out. I might have been able to back the log file up to our backup drive, hoping that no other database needed to grow its log file. Paul Randal, in http://technet.microsoft.com/en-us/magazine/2009.02.logging.aspx, writes:
      "Under no circumstances should you delete the transaction log, try to rebuild it using undocumented commands, or simply truncate it using the NO_LOG or TRUNCATE_ONLY options of BACKUP LOG (which have been removed in SQL Server 2008). These options will either cause transactional inconsistency (and more than likely corruption) or remove the possibility of being able to properly recover the database."
    Were there any other options I'm not aware of?
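
    For comparison, the approach that keeps the log chain intact is to back the log up to a real file on any drive with room, and then shrink. A sketch with hypothetical database and file names:
      -- back up the log to the backup drive instead of truncating it
      BACKUP LOG [MyDb] TO DISK = N'E:\backups\MyDb_log.trn';
      -- then shrink the log file (logical name per sys.database_files), target size in MB
      USE [MyDb];
      DBCC SHRINKFILE (MyDb_log, 1024);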

    Read the article

  • Apache: can't renew SSL certificate

    - by Caballero
    I have a GoDaddy SSL certificate for one website on my dedicated server running CentOS 5.3 / Apache 2.2.3. I recently renewed the certificate with GoDaddy, yet it now shows as expired on my website. I have since re-keyed the certificate and re-uploaded the domain.key, domain.crt and bundle.crt files (example file names) to the server, and restarted Apache, but the certificate still shows as expired. I'm running out of clues. I've tried replacing the content of the .crt files with gibberish and restarting Apache; it still shows the certificate as expired, even though the file shouldn't be picked up at all. I eventually rebooted the dedicated server, still no luck. The free SSL check tool http://www.digicert.com/help/ clearly shows all green checks except one: certificate is expired. Has someone any idea what might be causing this? Could there be some kind of caching going on here?
    UPDATE: after running
      openssl x509 -in domain.crt -noout -enddate
    I get this output:
      notAfter=Jun 2 08:16:51 2013 GMT
    So I assume this means I have the right certificate on the server, and yet the old expired one shows on the web...
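
    The detail that gibberish in the .crt files changes nothing suggests Apache is reading a different file than the one being edited, so it is worth comparing what is actually served against what is on disk and where the config really points. A sketch:
      # what the running Apache serves on port 443
      echo | openssl s_client -connect www.example.com:443 2>/dev/null | openssl x509 -noout -enddate
      # which certificate file(s) the config references
      grep -r SSLCertificateFile /etc/httpd/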

    Read the article

  • Squid: The request or reply is too large

    - by Ueli
    I have set up a reverse proxy with an Apache in the background (on the same server). All works great, but I can't open one page; I get the error "The request or reply is too large." My cache.log contains:
      2010/12/09 15:28:29| WARNING: http.c:971: HTTP header too large
      2010/12/09 15:29:03| ctx: enter level 0: 'http://server/admin/cms/nav'
      2010/12/09 15:29:03| httpProcessReplyHeader: Too large reply header
      2010/12/09 15:29:03| ctx: exit level 0
    In squid.conf I disabled the limits on the request and reply body, without success:
      reply_body_max_size 0 allow all
      request_body_max_size 0
    Does someone know why this doesn't work? Thank you very much.
    Squid version:
      Squid Cache: Version 2.7.STABLE3
      configure options: '--prefix=/usr' '--exec_prefix=/usr' '--bindir=/usr/sbin' '--sbindir=/usr/sbin' '--libexecdir=/usr/lib/squid' '--sysconfdir=/etc/squid' '--localstatedir=/var/spool/squid' '--datadir=/usr/share/squid' '--enable-async-io' '--with-pthreads' '--enable-storeio=ufs,aufs,coss,diskd,null' '--enable-linux-netfilter' '--enable-arp-acl' '--enable-epoll' '--enable-removal-policies=lru,heap' '--enable-snmp' '--enable-delay-pools' '--enable-htcp' '--enable-cache-digests' '--enable-underscores' '--enable-referer-log' '--enable-useragent-log' '--enable-auth=basic,digest,ntlm,negotiate' '--enable-negotiate-auth-helpers=squid_kerb_auth' '--enable-carp' '--enable-follow-x-forwarded-for' '--with-large-files' '--with-maxfd=65536' 'amd64-debian-linux' 'build_alias=amd64-debian-linux' 'host_alias=amd64-debian-linux' 'target_alias=amd64-debian-linux' 'CFLAGS=-Wall -g -O2' 'LDFLAGS=' 'CPPFLAGS='
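
    Note that the log complains about a header, not a body, and the *_body_max_size directives only govern message bodies. If memory serves, Squid 2.7 caps headers at around 20 KB by default; a hedged sketch of the directives to try in squid.conf (verify the names against your build's squid.conf.default):
      # raise the header limits past the default
      request_header_max_size 64 KB
      reply_header_max_size 64 KB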

    Read the article

  • Weird caching bug where old version of the same web page (same filename) is still called (Windows 2008 R2, Tomcat 5.5)

    - by user717236
    This is definitely one of the strangest errors I've seen, and it occurs intermittently. I am running Windows 2008 R2, IIS 7.5, and Apache Tomcat 5.5, by the way. Let's say I have two machines, A and B, both running Windows 2008 R2. I have a web page called login.jsp on machine A, and a newer, modified version of login.jsp on machine B. Now I copy the new login.jsp from machine B and paste it to machine A, replacing the older version with the same filename. For whatever reason, when I hit the web page in my browser from a local machine (i.e. my laptop), it still serves the old version of the page, even though it's been replaced! I tried restarting IIS and Apache Tomcat: that didn't work. I tried restarting machine A: that didn't work. I tried a cold reboot of my local machine, and that didn't work either. So I spoke to someone I can confide in for help. He said to open the login.jsp page in Notepad, put a space in, save the file, and try again. Sure enough, it worked. He said he hasn't seen this on Windows 2003, but it occurs with Windows 2008. What I don't understand is why that worked, what the heck this error is, and how I can really diagnose and resolve it for good instead of using the hack my colleague proposed. Is this bug related to Windows 2008, Windows 2008 R2, Tomcat, or something else entirely? Does anyone else have the same problem? Thank you for any help.
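
    A hedged explanation that fits the symptoms: Tomcat's Jasper compiler recompiles a JSP only when the source file's modification time is newer than that of the compiled class, and a Windows copy can preserve the source's older timestamp, so the stale class keeps being served. Adding a space in Notepad works because saving refreshes the timestamp. Two less hacky equivalents (paths are hypothetical; the work directory assumes a default Tomcat 5.5 layout):
      REM refresh the timestamp after copying
      powershell -c "(Get-Item 'C:\app\login.jsp').LastWriteTime = Get-Date"
      REM or stop Tomcat and clear the compiled-JSP cache so everything recompiles
      rmdir /s /q "C:\Tomcat5.5\work\Catalina"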

    Read the article

  • OHS 11g R2 - How to restrict access only to Intranet users

    - by Pavan
    For one of the sub-paths, I am trying to restrict access only to Intranet originated requests. I tried following configuration, but it's not working as expected. <VirtualHost *:7777> Debug ON RewriteEngine On RewriteOptions inherit RewriteRule ^/$ /test1 [R,L] RewriteRule ^/test2$ - [R=404] [L] RewriteRule ^/stage$ /stage/test1 [R,L] RewriteRule ^/stage/test2$ - [R=404] [L] <IfModule weblogic_module> WebLogicCluster localhost:7003,localhost:7005 </IfModule> <Location /test1> SetHandler weblogic-handler </Location> <Location /test2> SetHandler weblogic-handler </Location> <Location /api> SetHandler weblogic-handler PathPrepend /test1 </Location> <Directory /stage/test1> Order deny,allow deny from all Allow from 192.168 Allow from 127 </Directory> <Directory /stage/test2> Order deny,allow deny from all Allow from 192.168 Allow from 127 </Directory> <Directory /stage/api> Order deny,allow deny from all Allow from 192.168 Allow from 127 </Directory> <Location /stage/test1> SetHandler weblogic-handler WebLogicCluster localhost:7203,localhost:7205 PathTrim /stage </Location> <Location /stage/test2> SetHandler weblogic-handler WebLogicCluster localhost:7203,localhost:7205 PathTrim /stage </Location> <Location /stage/api> SetHandler weblogic-handler WebLogicCluster localhost:7203,localhost:7205 PathTrim /stage PathPrepend /test1 </Location> </VirtualHost> Can someone please help me resolving this?

    Read the article

  • How to set up mysql storage for certain rsyslog input matches?

    - by ylluminate
    I'm draining various logs from Heroku to an rsyslog Linux (Ubuntu) server, and am starting to have a little more to bite off than I can chew in terms of working with my log histories. I need to be able to drill back in time based on more flexible details, with more flexible access, than the standard syslog file(s) provide. I'm thinking that logging to MySQL may be the correct approach, but how do I set this up so that it pulls only certain log entries into a table, based on an identifier? For example, I see a long hex string identifying each log entry from a certain Heroku app instance. I assume I can pipe just those into MySQL, versus ALL rsyslog input... Could someone please direct me to a resource that walks through setting something like this up, or simply provide some details that can help? I have 15+ years of Unix experience, so I just need some nudging in the right direction, as I've not really done a tremendous amount of work with syslog daemons in terms of pooling various servers into one. Additionally, I'd be interested in any log review tools that could make drilling through log arrangements like this handier for developers.
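
    A sketch of the selective-MySQL idea (hedged: the hex identifier and credentials are placeholders, and on Ubuntu the ommysql module comes from the rsyslog-mysql package, which also ships the standard Syslog/SystemEvents schema):
      # /etc/rsyslog.d/60-heroku-mysql.conf
      $ModLoad ommysql
      # only entries whose message carries this app-instance id go to MySQL
      :msg, contains, "d.9173ea1f" :ommysql:localhost,Syslog,rsyslog,password
      # everything else keeps flowing to the normal log files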

    Read the article

  • IIS 7.5 returning 404 for unknown host names

    - by WaldenL
    This just doesn't seem correct to me, so I'm looking for someone to tell me how I've misconfigured IIS... The configuration is IIS 7.5 (2008 R2), without SP1. I have IIS 7.5 configured with several sites. ALL sites have host names defined in their bindings; there is NO site without a host name. However, if I request an unknown host name from the server, IIS (technically Microsoft-HTTPAPI/2.0) returns a 404 error, not a 400 error. I would expect a 400 (or some other major error) rather than a lowly 404. This causes a problem when I have nginx in front of multiple IIS machines and want to stop a site so nginx takes it out of rotation: since IIS still returns a 404 for the request even when there is no active site for that name, nginx doesn't know the server is dead. NB: IIS returns the 404 regardless of whether there is a site that is stopped, or no site at all. Thoughts? Solutions?
    Additional info: OK, I added a site on a port other than 80 (5000) and then, on a connection to that port, asked for a site that doesn't exist, and I get the expected error 400 (Invalid hostname). So, while IIS isn't listening for generic (no host name) connections on port 80, it would seem that something is. Any ideas how to get HTTP.sys to dump the list of what it's listening for?
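
    On the last question, HTTP.sys can be asked directly what it is listening for; these netsh commands exist on 2008 R2 (run from an elevated prompt):
      REM registered URLs and which request queue/process owns each
      netsh http show servicestate
      REM explicit URL reservations
      netsh http show urlacl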

    Read the article

  • iptables to block VPN traffic if not through tun0

    - by dacrow
    I have a dedicated webserver running Debian 6 and some Apache, Tomcat, Asterisk and mail stuff. Now we needed to add VPN support for a special program, so we installed OpenVPN and registered with a VPN provider. The connection works well, and we have a virtual tun0 interface for tunnelling. To achieve the goal of tunnelling only a single program through the VPN, we start the program with
      sudo -u username -g groupname command
    and added an iptables rule to mark all traffic coming from groupname:
      iptables -t mangle -A OUTPUT -m owner --gid-owner groupname -j MARK --set-mark 42
    Afterwards we tell iptables to do some SNAT and tell ip route to use a special routing table for marked packets.
    Problem: if the VPN fails, there is a chance that the to-be-tunnelled program communicates over the normal eth0 interface.
    Desired solution: marked traffic should not be allowed to go directly through eth0; it has to go through tun0 first.
    I tried the following commands, which didn't work:
      iptables -A OUTPUT -m owner --gid-owner groupname ! -o tun0 -j REJECT
      iptables -A OUTPUT -m owner --gid-owner groupname -o eth0 -j REJECT
    The problem may be that these rules don't work because the packets are first marked, then put into tun0, and then transmitted via eth0 while they are still marked. I don't know how to de-mark them after tun0, or how to tell iptables that marked packets may pass eth0 if they have been through tun0 first or are going to the gateway of my VPN provider. Does someone have any idea for a solution?
    Some config info:
      iptables -nL -v --line-numbers -t mangle
      Chain OUTPUT (policy ACCEPT 11M packets, 9798M bytes)
      num  pkts  bytes  target    prot opt in  out  source     destination
      1    591K  50M    MARK      all  --  *   *    0.0.0.0/0  0.0.0.0/0    owner GID match 1005 MARK set 0x2a
      2    82812 6938K  CONNMARK  all  --  *   *    0.0.0.0/0  0.0.0.0/0    owner GID match 1005 CONNMARK save
      iptables -nL -v --line-numbers -t nat
      Chain POSTROUTING (policy ACCEPT 393 packets, 23908 bytes)
      num  pkts  bytes  target    prot opt in  out   source     destination
      1    15    1052   SNAT      all  --  *   tun0  0.0.0.0/0  0.0.0.0/0   mark match 0x2a to:VPN_IP
      ip rule add from all fwmark 42 lookup 42
      ip route show table 42
      default via VPN_IP dev tun0
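
    A hedged sketch of one common leak-guard: the packets that actually leave on eth0 after tunnelling are OpenVPN's encrypted packets, generated by the openvpn process (a different user/group), so they carry no mark; a mark-42 packet on eth0 is therefore a genuine leak and can be rejected outright. Worth testing against the CONNMARK rule above, which could complicate the picture:
      # marked traffic may only leave via tun0; if tun0 is down, the fallback route hits this and is refused
      iptables -A OUTPUT -o eth0 -m mark --mark 42 -j REJECT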

    Read the article

  • Does this exist: a standardized way of documenting a file-system structure

    - by eegg
    At work, I'm in charge of maintaining the organization of a whole lot of varied data on a standard file system. Part of this is coming up with a sensible classification (by similarity, need, read/write access, etc.), but the bigger part is actually documenting it: what documents/files/media should go where, what should not be in this directory, "for something slightly different, see ../../other-dir", and so on. At the moment, I've documented this using a plaintext file named filing.txt in every directory I want to document. If someone is unsure what's meant to be in a directory, they read that file. This works all right, but it seems odd that I have this primitive custom solution to a problem that any maintainer of a non-trivial directory structure must experience. Every company I've known of, for example, has some kind of shared file system where agreed terminology for categorization is important. In my experience, people just have to learn what's what by trial and error and experimentation. So allow me to propose a better solution, and hopefully you can tell me if it exists. Any directory on any filesystem can have a hidden plaintext file named .filing. Its contents are descriptive human language. It uses some markup like Markdown, with little more than bold, italic, and (relative) hyperlinks to other directories. Now a suitably enabled file browser will check for a file named .filing whenever it displays a directory. If it exists, its contents are parsed and displayed in an unobtrusive pane near the directory-path widget. Any links therein can be clicked, and the user will be taken to the target directory of that link. I think the effort of implementing such a standard would pay back many times over in usability gains. We would have, say, plugins for Nautilus, Konqueror, etc. It could be used to display directory information in the standard file lists served by webservers. And so on. So, question: does such a thing exist? If not, why not? Do people think it's a worthwhile idea?
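
    To make the proposal concrete, a .filing file under the scheme described above might read (an invented example):
      **Quarterly reports** go here, one subdirectory per year.
      Draft material does *not* belong here; see [../drafts](../drafts).
      Read access: everyone. Write access: finance team only.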

    Read the article

  • Correct use of SMTP "Sender" header?

    - by Eric Rath
    Our web application sends email messages to people when someone posts new content. Both sender and recipient have opted in to receiving email messages from our application. When preparing such a message, we set the following SMTP headers:
      FROM: [email protected]
      TO: [email protected]
      SENDER: [email protected]
    We chose to use the author's email address in the FROM header in an attempt to provide the best experience for the recipient: when they see the message in their mail client, the author is clear. To avoid the appearance of spoofing, we added the SENDER header (with our own company email address) to make it clear that we sent the message on the author's behalf. After reading RFCs 822 and 2822, this seems to be an intended use of the SENDER header. Most receiving mail servers seem to handle this well; the email message is delivered normally (assuming the recipient mailbox exists, is not over quota, etc.). However, when sending a message FROM an address in a domain TO an address in the same domain, some receiving domains reject the messages with a response like:
      571 incorrect IP - psmtp (in reply to RCPT TO command)
    I think this means the receiving server only saw that the FROM address was in its own domain, and that the message originated from a server it didn't consider authorized to send messages for that domain. In other words, the receiving server ignored the SENDER header. We have a workaround in place: the webapp keeps a list of such domains that seem to ignore the SENDER header, and when the FROM and TO headers are both in such a domain, it sets the FROM header to our own email address instead. But this list requires maintenance. Is there a better way to achieve the desired experience? We'd like to be a "good citizen" of the net, and all parties involved (senders and recipients) want to participate and receive these messages. One alternative is to always use our company email address in the FROM header and prepend the author's name/address to the subject, but this seems a little clumsy.
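
    One widely used variant of that last alternative, sketched with the same scrubbed placeholder addresses: keep the company address in FROM but carry the author's name in the display part, and point replies back to the author with REPLY-TO, which most clients honor:
      FROM: "Author Name (via Our App)" <[email protected]>
      REPLY-TO: [email protected]
      TO: [email protected]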

    Read the article
