Search Results

Search found 32492 results on 1300 pages for 'reporting database'.


  • Block spam by using a GeoIP filter?

    - by faultyserver
    We are looking for a way to block spam based on geographic location, by filtering on GeoIP. Context: we rarely have any email correspondence outside of the USA, so we would like to block all incoming email from outside the US, except for maybe one or two countries. After a little Googling I have found a couple of solutions that may work (or not), but I would like to know what other sysadmins are currently doing or what they would recommend as a solution. Here is what I have found so far: using PowerDNS and its GeoIP backend, it is possible to filter by GeoIP. Normally this backend is used to help distribute load as a kind of load balancing, but I don't see why it couldn't be used to kill spam as well. Alternatively, possibly use the MaxMind lite country database and some scripting to do a similar job. Ideally what I am looking for is a solution that would handle decent load and scale well too... aren't we all! ;) Thanks in advance for your help! :-)
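
    For the scripting route, here is a rough sketch of a Postfix policy service that looks up the sending client's country in the MaxMind database via the geoip2 Python package. This is an illustration of the idea, not a tested setup; the database path, port, and country list are placeholders:

        import socketserver
        import geoip2.database

        ALLOW = {"US", "CA"}   # countries whose mail we accept; placeholder list
        READER = geoip2.database.Reader("/var/lib/GeoIP/GeoLite2-Country.mmdb")

        class PolicyHandler(socketserver.StreamRequestHandler):
            def handle(self):
                attrs = {}
                for raw in self.rfile:                  # Postfix sends "name=value" lines,
                    line = raw.decode("ascii", "replace").strip()
                    if not line:                        # terminated by an empty line
                        break
                    name, _, value = line.partition("=")
                    attrs[name] = value
                action = "DUNNO"                        # leave the decision to later restrictions
                try:
                    cc = READER.country(attrs.get("client_address", "")).country.iso_code
                    if cc and cc not in ALLOW:
                        action = "REJECT mail not accepted from this region"
                except Exception:
                    pass                                # lookup failed (private IP etc.): don't block
                self.wfile.write(f"action={action}\n\n".encode())

        if __name__ == "__main__":
            socketserver.TCPServer(("127.0.0.1", 10040), PolicyHandler).serve_forever()

    Postfix would call this with check_policy_service inet:127.0.0.1:10040 in smtpd_recipient_restrictions; answering DUNNO rather than OK keeps the rest of your restrictions in play.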

    Read the article

  • What kind of server do I need to handle 10 million requests and MySQL queries a day?

    - by Calvin
    I'm a newbie at server administration and I'm looking for a powerful hosting service to host my new website. This website is basically the back-end of a mobile online game, and it will: handle up to 10 million HTTPS requests and MySQL queries a day; store up to 2000 GB of files on the hard disk; transfer probably 5000 GB of data in and out per month; run on PHP and MySQL; and have 10 million records in the MySQL database, with 5-10 fields per record at around 100 bytes each. I really don't know what kind of server I need to handle these requirements. My questions are: what CPU/RAM do I need for a dedicated server or VPS? Which hosting companies are able to offer this kind of dedicated server or VPS? What about cloud computing? I've researched Amazon EC2 but it seems complicated to me. And I've contacted Rackspace, but strangely they said Cloud Sites is not suitable for my requirements. I wonder if there is another cloud hosting company. Any other alternative method? Thanks very much!
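
    A back-of-envelope translation of those figures into per-second numbers often answers the sizing question faster than a spec sheet; the 5x peak-to-average factor below is an assumption, not a measurement:

        requests_per_day = 10_000_000
        avg_rps = requests_per_day / 86_400     # seconds in a day -> ~116 req/s average
        peak_rps = avg_rps * 5                  # assumed peak-to-average ratio

        rows, fields, bytes_per_field = 10_000_000, 10, 100
        table_gb = rows * fields * bytes_per_field / 1e9    # ~10 GB of raw row data

        print(f"average {avg_rps:.0f} req/s, plan for ~{peak_rps:.0f} req/s at peak")
        print(f"table holds roughly {table_gb:.0f} GB before indexes")

    Ten gigabytes of hot data fits comfortably in RAM on a modest dedicated box, which is usually the deciding factor for MySQL read performance.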

    Read the article

  • Windows Handling Piped Commands Error Redirection

    - by jpmartins
    Warning: I am no expert at building scripts, and sorry for my lousy English. To generate a CSV from a database query I'm using the following commands. ... CALL java.exe -classpath ... com.xigole.util.sql.Jisql -user dmfodbc -pf pwd.file -driver com.sybase.jdbc3.jdbc.SybDriver -cstring %constr% -c ; -input 42.sql -formatter csv -delimiter ; 2>>%LOGFILE% | CALL grep -v -e "SELECT right" -e "executing: " -e " rows affect" %FicheiroR% 2>>%LOGFILE% ... I'm using a Windows implementation of grep. The 2>>%LOGFILE% in both the java and grep commands is causing an error message indicating the file is in use by another process. The ugly workaround I have come up with is to redirect grep's errors to a temporary %LOGFILE%.aux: java ... | grep ... 2>>%LOGFILE%.aux type %LOGFILE%.aux >> %LOGFILE% del %LOGFILE%.aux What is a better solution?
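
    If driving the pipeline from a script is an option, giving each stage its own stderr file sidesteps two processes opening %LOGFILE% at once. A minimal sketch of that idea in Python; the java and grep argument lists are abbreviated placeholders:

        import subprocess

        java_cmd = ["java", "-classpath", "...", "com.xigole.util.sql.Jisql"]  # abbreviated
        grep_cmd = ["grep", "-v", "-e", "SELECT right", "-e", "executing: ", "-e", " rows affect"]

        with open("java.err", "w") as jerr, \
             open("grep.err", "w") as gerr, \
             open("result.csv", "w") as out:
            p1 = subprocess.Popen(java_cmd, stdout=subprocess.PIPE, stderr=jerr)
            p2 = subprocess.Popen(grep_cmd, stdin=p1.stdout, stdout=out, stderr=gerr)
            p1.stdout.close()                   # let p1 see EOF if p2 exits early
            p2.wait()
            p1.wait()

        with open("pipeline.log", "a") as log:  # merge the two private stderr logs afterwards
            for name in ("java.err", "grep.err"):
                with open(name) as part:
                    log.write(part.read())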

    Read the article

  • MySQL Privileges required to GRANT EVENT, EXECUTE, LOCK TABLES, and TRIGGER

    - by Brad
    I have an account, user_a, and I would like to grant all available permissions on some_db to user_b. I have tried the following query: GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SELECT, SHOW VIEW, TRIGGER, UPDATE ON `some_db`.* TO 'user_b'@'%' WITH GRANT OPTION The result: Access denied for user 'user_a'@'%' to database 'some_db' Some experimentation has shown me that the only permissions my account (user_a) is unable to grant are EVENT, EXECUTE, LOCK TABLES, and TRIGGER. What privileges are required for my account to GRANT these privileges to another user? If I run SHOW GRANTS, I get this output: "GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, SHOW DATABASES, SUPER, CREATE TEMPORARY TABLES, REPLICATION SLAVE, REPLICATION CLIENT, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, CREATE USER ON *.* TO 'user_a'@'%' IDENTIFIED BY PASSWORD '1234567890abcdef' WITH GRANT OPTION" "GRANT SELECT, INSERT, UPDATE, DELETE, EXECUTE ON `some_other_unrelated_db`.* TO 'user_a'@'%'" "GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE ROUTINE, ALTER ROUTINE ON `another_unrelated_db`.* TO 'user_a'@'%' WITH GRANT OPTION"

    Read the article

  • The MAPI call 'OpenMsgStore' failed: The MAPI provider failed Exchange 2003

    - by realitnzsam
    Hi guys, recently we moved our Exchange 2003 (SP2) database from one drive to another. Now every other day or so we get errors coming up in the event log:
    Event Type: Error
    Event Source: MSExchangeSA
    Event Category: MAPI Session
    Event ID: 9175
    Date: 10/03/2010
    Time: 8:06:15 a.m.
    User: N/A
    Computer: SERVER
    Description: The MAPI call 'OpenMsgStore' failed with the following error: The attempt to log on to the Microsoft Exchange Server computer has failed. The MAPI provider failed. Microsoft Exchange Server Information Store ID no: 8004011d-0512-00000000 For more information, click http://www.microsoft.com/contentredirect.asp.
    Restarting the Exchange Information Store fixes this instantly, but until we do, Outlook won't connect to Exchange and BlackBerry emails aren't pushing out.

    Read the article

  • What is your favorite password storage tool?

    - by Marcel Levy
    Aside from personal passwords, I'm always juggling a number of project-specific passwords, including those for network, web and database authentication. Some authentication can be managed with ssh keys and the like, but everywhere I've worked I also faced the need for the management of passwords that need to be available to a number of different people. So what do you use, either for personal or team-based password management? Personally I'd like to hear about cross-platform tools, but I'm sure other people would be satisfied with Windows-only solutions. I know the stackoverflow podcast tackled this issue in #7 and #9, but I'm hoping we can come up with the definitive answer here. Update: Even though this question was asked before its sibling site existed, you should probably add your two cents to the more active question over at superuser, which is a more appropriate venue for this.

    Read the article

  • Command line safety tricks

    - by deadprogrammer
    The command line and scripting are dangerous. Make a little typo with rm -rf and you are in a world of hurt. Confuse prod with stage in the name of the database while running an import script and you are boned (if they are on the same server, which is not good, but happens). The same goes for noticing too late that the server you sshed into is not the one you thought it was, after running some commands. You have to respect the Hole Hawg. I have a few little rituals before running risky commands, like doing a triple-take check of the server I'm on. Here's an interesting article on rm safety. What little rituals, tools and tricks keep you safe on the command line? And I mean objective things, like "first run ls foo*, look at the output of that and then substitute ls with rm -rf, to avoid running rm -rf foo * or something like that", not "make sure you know what the command will do".
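
    In the spirit of the "run ls first, then delete exactly that list" ritual, here is a small sketch of a confirm-before-delete wrapper (an editor's illustration, not from the question); the pattern argument is whatever glob you would have handed to rm:

        import glob
        import os
        import sys

        pattern = sys.argv[1]       # e.g. "foo*" (quote it so the shell doesn't expand it)
        matches = sorted(glob.glob(pattern))
        for path in matches:
            print(path)             # the "ls" step: show exactly what would go
        if matches and input(f"Delete these {len(matches)} files? [y/N] ").lower() == "y":
            for path in matches:
                os.remove(path)     # deletes precisely the list shown, nothing more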

    Read the article

  • Connect to MS Sql 2008 on local VM

    - by Campo
    I have a test machine: Server 2008 with Hyper-V and MSSQL 2008 Enterprise. Let's call it MACHINE A. On it there is a VM, also Server 2008, with another MSSQL 2008 Enterprise instance. Call it VM B. I set up a DB on MACHINE A, then backed it up and restored it onto VM B, following the "prepare database for mirroring" instructions on MSDN. I used to be able to connect to the VM B instance from the main test server (MACHINE A), but now I cannot for some reason. It cannot seem to find the instance at all, even when I browse network databases. I can ping the VM from any computer on the network and access its shares, so I know it is discoverable. Maybe it's just the end of a long day and I am missing something here.

    Read the article

  • Linux and Windows Server Setup

    - by Brian
    Hello, I have a Win 2008 R2 machine (a home machine of mine) that I am messing around with to learn the server technologies. I also wanted to try out Oracle, and was wondering if it's possible to set up a Linux machine with Oracle and have the two interoperate. What I mean by that is: if I set up the server and my laptop on a domain, would it be possible to communicate with that Linux machine and thus the Oracle database, and if so, are there any good resources on the setup? I was going to create a Linux Hyper-V virtual machine... Any tips appreciated. Thanks.

    Read the article

  • MySQL doesn't talk to PHP anymore (EasyPHP)

    - by Matt Ellen
    I've just upgraded from Windows XP to Windows 7 (64-bit). I was using EasyPHP 5.3.1 to develop my website, but since I've upgraded I can't get PHP to talk to MySQL. Even the phpMyAdmin page doesn't load. I've tried installing the latest 64-bit version of MySQL in place of the supplied version, but that hasn't helped. The queries just don't seem to reach MySQL. I have verified that the database itself works by running mysql on the command line. phpMyAdmin doesn't display an error, just a blank page. The errors coming up from my website are:
    Warning: PDO::__construct() [pdo.--construct]: [2002] A connection attempt failed because the connected party did not properly respond after a period of time (trying to connect via tcp://localhost:3306) in E:\services\EasyPHP-5.3.1\www\IdeaWeb\classes\Security.inc on line 14
    Fatal error: Maximum execution time of 60 seconds exceeded in E:\services\EasyPHP-5.3.1\www\IdeaWeb\classes\Security.inc on line 0
    Does anyone know how to solve this? (i.e. get MySQL talking to PHP.)
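
    One diagnostic worth running (an editor's aside, not part of the original question): on 64-bit Windows 7, "localhost" often resolves to the IPv6 address ::1 first, while MySQL may be listening only on IPv4, so a DSN using localhost can time out exactly like this. A few lines of Python show whether anything answers on either address:

        import socket

        for host in ("127.0.0.1", "::1"):
            try:
                socket.create_connection((host, 3306), timeout=3).close()
                print(f"{host}:3306 answered")
            except OSError as exc:
                print(f"{host}:3306 unreachable: {exc}")

    If only 127.0.0.1 answers, switching the PDO DSN from localhost to 127.0.0.1 is a common fix.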

    Read the article

  • Using a PivotTable to Count Items in Access

    - by Sandra
    I have a list of text entries and I want to count how often each entry appears in the list, e.g.:
    Berlin
    Paris
    London
    London
    Paris
    Paris
    Paris
    The result would be:
    Berlin 1
    Paris 4
    London 2
    This result is easy to achieve with a pivot table in MS Excel (see: Count Items in Excel). My data is not in an Excel spreadsheet, however, but in an MS Access database table. So in order to avoid constantly switching between Access and Excel, I would like to handle everything in Access (either Access 2007 or 2010). I know there are pivot tables in Access and I know how to display one, but I was unable to find out how to count the number of occurrences. Thank you!
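
    The count a pivot table produces is just a GROUP BY, and Access can do that directly as a totals query (the Σ "Totals" button in the query designer: group on the text field, then Count it). The same query sketched in Python against an in-memory SQLite table, only to show the shape of the result:

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE entries (city TEXT)")
        conn.executemany("INSERT INTO entries VALUES (?)",
                         [(c,) for c in ["Berlin", "Paris", "London", "London",
                                         "Paris", "Paris", "Paris"]])
        for city, n in conn.execute(
                "SELECT city, COUNT(*) AS n FROM entries GROUP BY city ORDER BY city"):
            print(city, n)          # Berlin 1 / London 2 / Paris 4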

    Read the article

  • Tool to maintain/keep track of filesystem content integrity?

    - by Jesse
    I'm looking for a tool to maintain the integrity of a filesystem and its contents using checksums: effectively, storing a list of checksum/filename pairs somewhere on the filesystem, in a way that can be verified later if files are somehow damaged or lost. Git does what I want, but because it stores the contents of every file in its object database, the disk usage will at least double. And the fact that it does not provide a progress bar when scanning files tells me it was not designed for the multi-terabyte filesystem I have in mind. I can do this crudely by storing the output of md5deep, but is there a tool specifically designed for this purpose, using whatever smarts possible to make the process efficient?
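
    For a sense of what such a tool has to do, here is a minimal build-and-verify sketch of the crude store-the-output-and-recheck approach, written out in Python; the hash choice and the two-space manifest format are arbitrary:

        import hashlib
        import os
        import sys

        def file_hash(path, bufsize=1 << 20):
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(bufsize), b""):
                    h.update(chunk)
            return h.hexdigest()

        def build(root, manifest):
            done = 0
            with open(manifest, "w") as out:
                for dirpath, _, files in os.walk(root):
                    for name in files:
                        path = os.path.join(dirpath, name)
                        out.write(f"{file_hash(path)}  {path}\n")
                        done += 1
                        if done % 1000 == 0:
                            print(f"{done} files hashed...", file=sys.stderr)  # crude progress

        def verify(manifest):
            for line in open(manifest):
                digest, path = line.rstrip("\n").split("  ", 1)
                try:
                    if file_hash(path) != digest:
                        print(f"CHANGED  {path}")
                except FileNotFoundError:
                    print(f"MISSING  {path}")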

    Read the article

  • rm on a directory with millions of files

    - by BMDan
    Background: physical server, about two years old; 7200-RPM SATA drives connected to a 3Ware RAID card; ext3 FS mounted noatime and data=ordered; not under crazy load; kernel 2.6.18-92.1.22.el5; uptime 545 days. The directory doesn't contain any subdirectories, just millions of small (~100 byte) files, with some larger (a few KB) ones.
    We have a server that has gone a bit cuckoo over the course of the last few months, but we only noticed it the other day when it started being unable to write to a directory due to it containing too many files. Specifically, it started throwing this error in /var/log/messages:
    ext3_dx_add_entry: Directory index full!
    The disk in question has plenty of inodes remaining:
    Filesystem            Inodes    IUsed      IFree IUse% Mounted on
    /dev/sda3           60719104  3465660   57253444    6% /
    So I'm guessing that means we hit the limit of how many entries can be in the directory file itself. No idea how many files that would be, but it can't be more, as you can see, than three million or so. Not that that's good, mind you! But that's part one of my question: exactly what is that upper limit? Is it tunable? Before I get yelled at: I want to tune it down; this enormous directory caused all sorts of issues.
    Anyway, we tracked down the issue in the code that was generating all of those files, and we've corrected it. Now I'm stuck with deleting the directory. A few options here:
    1. rm -rf (dir): I tried this first. I gave up and killed it after it had run for a day and a half without any discernible impact.
    2. unlink(2) on the directory: definitely worth consideration, but the question is whether it'd be faster to delete the files inside the directory via fsck than to delete via unlink(2). That is, one way or another, I've got to mark those inodes as unused. This assumes, of course, that I can tell fsck not to drop the entries to the files into /lost+found; otherwise, I've just moved my problem. In addition to all the other concerns, after reading about this a bit more, it turns out I'd probably have to call some internal FS functions, as none of the unlink(2) variants I can find would allow me to just blithely delete a directory with entries in it. Pooh.
    3. A loop:
    while [ true ]; do ls -Uf | head -n 10000 | xargs rm -f 2>/dev/null; done
    This is actually the shortened version; the real one I'm running, which just adds some progress reporting and a clean stop when we run out of files to delete, is:
    export i=0; time ( while [ true ]; do ls -Uf | head -n 3 | grep -qF '.png' || break; ls -Uf | head -n 10000 | xargs rm -f 2>/dev/null; export i=$(($i+10000)); echo "$i..."; done )
    This seems to be working rather well. As I write this, it has deleted 260,000 files in the past thirty minutes or so.
    Now, for the questions:
    1. As mentioned above, is the per-directory entry limit tunable?
    2. Why did it take "real 7m9.561s / user 0m0.001s / sys 0m0.001s" to delete a single file which was the first one in the list returned by ls -U, and perhaps ten minutes to delete the first 10,000 entries with the command in #3, but now it's hauling along quite happily? For that matter, it deleted 260,000 in about thirty minutes, but it has now taken another fifteen minutes to delete 60,000 more. Why the huge swings in speed?
    3. Is there a better way to do this sort of thing? Not "store millions of files in a directory"; I know that's silly, and it wouldn't have happened on my watch.
    Googling the problem and looking through SF and SO offers a lot of variations on find that obviously have the wrong idea; it's not going to be faster than my approach for several self-evident reasons. But does the delete-via-fsck idea have any legs? Or something else entirely? I'm eager to hear out-of-the-box (or inside-the-not-well-known-box) thinking. Thanks for reading the small novel; feel free to ask questions, and I'll be sure to respond. I'll also update the question with the final number of files and how long the delete script ran once I have that.
    Final script output!
    2970000...
    2980000...
    2990000...
    3000000...
    3010000...
    real    253m59.331s
    user    0m6.061s
    sys     5m4.019s
    So, three million files deleted in a bit over four hours.
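
    As an aside on option 3: the ls -Uf trick can be taken one step further by never spawning ls at all. Here is a sketch that streams the directory with os.scandir (a single unsorted readdir-style pass) and unlinks as it goes; unlinking entries while the scan is open is generally tolerated, but try it on a scratch directory first:

        import os

        def purge(directory, report_every=10_000):
            deleted = 0
            with os.scandir(directory) as entries:      # streams entries, no sorting
                for entry in entries:
                    if entry.is_file(follow_symlinks=False):
                        os.unlink(entry.path)
                        deleted += 1
                        if deleted % report_every == 0:
                            print(f"{deleted}...")      # same progress style as the shell loop
            return deleted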

    Read the article

  • How to enable GeoIP on Magento with Varnish page cache

    - by molleman
    I currently have 3 stores online with 3 different domains, running Magento with Apache and Varnish (using the Phoenix page cache extension) on CentOS. One store is for the UK, another for Ireland, and another for the USA. The trouble is, if (for example) a US user hits the UK store, I would like the user to be notified on the page to go to the correct store (I do not want them automatically redirected). I was able to use php-pecl-geoip with the MaxMind database to get this to work, but as users on my website have increased I had to begin using Varnish. How could I implement this functionality with Varnish, so I know what country the user is from and can display a message telling them to view their relevant website?

    Read the article

  • Switching from Amazon EC2 instance-store to EBS Volume

    - by Adam
    Hi, I have an Amazon EC2 instance that is using an instance store as its root device. It has no EBS volumes attached to it. It has a database and a running web application on it. If I understand correctly this is a bad setup, as I would lose all the data on the instance if it were terminated or the underlying hardware failed. I would like to correct this mistake. I'd like to move all the data on the running instance to a new EBS volume and make that new volume the root device. How do I go about doing this? Thanks!
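
    For reference, the volume-creation half of that migration, sketched with today's boto3 SDK (which postdates this question); the region, size, and instance ID are made up. Note that an instance-store instance cannot simply swap in an EBS root device; the usual route is to copy the data onto the new volume (or an EBS-backed AMI) and launch a replacement instance from it:

        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=50)
        ec2.get_waiter("volume_available").wait(VolumeIds=[vol["VolumeId"]])

        ec2.attach_volume(VolumeId=vol["VolumeId"],
                          InstanceId="i-0123456789abcdef0",   # hypothetical instance
                          Device="/dev/sdf")
        # From inside the instance: mkfs the new device, mount it, rsync the
        # data across, then repoint the application (and /etc/fstab) at it.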

    Read the article

  • How many users are "many users"?

    - by kemp
    I need to find a solution for a website which is struggling under load. The site gets ~500 simultaneous connections during peak time, and counts around 42k hits per day. It's a WordPress-based site bridged with a vBulletin forum, with a lot of content and a fairly complex structure which makes intensive use of the database. I already implemented code-level full-page caching (without this the server just crashes), configured all other caching directives, and combined CSS files and the like to limit HTTP requests as much as possible. I need to understand if there is more that can be done via software, or if the load is just too much for the server to handle and it needs to be upgraded, because the server goes down occasionally during peak times. I can't access the server now, but it's a dedicated CentOS machine (I think 4 GB RAM, can't say what CPU) running Apache/MySQL. So back to the main question: how can I know when the users are just too many?

    Read the article

  • Bash script to keep last x number of files and delete the rest

    - by Brady
    I have this bash script which nicely backs up my database on a cron schedule:
    #!/bin/sh
    PT_MYSQLDUMPPATH=/usr/bin
    PT_HOMEPATH=/home/philosop
    PT_TOOLPATH=$PT_HOMEPATH/philosophy-tools
    PT_MYSQLBACKUPPATH=$PT_TOOLPATH/mysql-backups
    PT_MYSQLUSER=*********
    PT_MYSQLPASSWORD="********"
    PT_MYSQLDATABASE=*********
    PT_BACKUPDATETIME=`date +%s`
    PT_BACKUPFILENAME=mysqlbackup_$PT_BACKUPDATETIME.sql.gz
    PT_FILESTOKEEP=14
    $PT_MYSQLDUMPPATH/mysqldump -u$PT_MYSQLUSER -p$PT_MYSQLPASSWORD --opt $PT_MYSQLDATABASE | gzip -c > $PT_MYSQLBACKUPPATH/$PT_BACKUPFILENAME
    The problem with this is that it will keep dumping backups into the folder and never clean up old files. This is where the variable PT_FILESTOKEEP comes in: whatever number this is set to, that's the number of backups I want to keep. All backups are timestamped, so ordering them by name DESC gives you the latest first. Can anyone please help me with the rest of the bash script to add the cleanup of old files? My knowledge of bash is lacking and I'm unable to piece together the code to do the rest.
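
    The missing retention step, sketched standalone in Python. In the script itself, a line along the lines of ls -1t $PT_MYSQLBACKUPPATH/mysqlbackup_*.sql.gz | tail -n +$(($PT_FILESTOKEEP+1)) | xargs -r rm -f after the mysqldump should do the same job (xargs -r is a GNU extension):

        import glob
        import os

        backup_dir = "/home/philosop/philosophy-tools/mysql-backups"   # $PT_MYSQLBACKUPPATH
        files_to_keep = 14                                             # $PT_FILESTOKEEP

        # Epoch-timestamped names sort chronologically, so reverse order is newest first.
        backups = sorted(glob.glob(os.path.join(backup_dir, "mysqlbackup_*.sql.gz")),
                         reverse=True)
        for old in backups[files_to_keep:]:
            os.remove(old)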

    Read the article

  • Is it OK for the top level domain .FM not to provide the whois server? [closed]

    - by Igor
    Just the question in the title: is it OK for the top-level domain .FM not to provide a whois server? Are there any other TLDs that behave in the same manner? The question came out of this whois command answer:
    $ whois dot.fm
    This TLD has no whois server, but you can access the whois database at http://www.dot.fm/whois.html
    EDIT Sorry for the vague "OK" in the question. We want to acquire a domain name under .fm, and my worry was that it would be treated as more suspect by antispam filters and other services that rely on DNS and whois. Does this concern make sense or not?

    Read the article

  • Try Exchange in a real domain

    - by AndreaCi
    We (as a company) would like to try Exchange Server as a replacement for our mail server. I downloaded the demo version from the Microsoft website, but during the installation it wants administrator access to the domain to edit the Active Directory database structure. The test will last for (at least) a month, to see if it will bring real advantages to our management systems. Here is my question: is it "dangerous"? If I uninstall Exchange Server, will everything be reverted to the previous state? I'm kind of "scared" about the changes it may apply to our domain controllers.

    Read the article

  • Which Kerberos flavor?

    - by Michael Lowman
    So I'm setting up a small network with all the standard stuff (files, email, etc.) and I've decided to go with a Kerberos+LDAP solution. Any ideas or recommendations on Heimdal vs. MIT? I've used MIT before, and tangentially Heimdal, but I don't really know of any real reason for using one over the other. I just know that I'd prefer not to realize I'd rather be running MIT after getting the whole Heimdal up and running with a full user database. If any other info'd be useful, I'm happy to provide.

    Read the article

  • Postfix tutorial inconsistency

    - by Desmond Hume
    I'm following this tutorial to set up a Postfix/Dovecot mail server with Postfix Admin as a web front end. As regards the directory structure for virtual mail users, the author of the tutorial writes: Virtual mail users are those that do not exist as Unix system users. They thus don't use the standard Unix methods of authentication or mail delivery and don't have home directories. That is how we are managing things here: mail users are defined in the database created by Postfix Admin rather than existing as system users. Mail will be kept in subfolders per domain and account under /var/vmail - e.g. me@example.com will have a mail directory of /var/vmail/example.com/me. But when he gives instructions for configuring Postfix Admin, he suggests that Postfix Admin's config.inc.php contain this:
    // Mailboxes
    // If you want to store the mailboxes per domain set this to 'YES'.
    // Examples:
    //   YES: /usr/local/virtual/domain.tld/username@domain.tld
    //   NO:  /usr/local/virtual/username@domain.tld
    $CONF['domain_path'] = 'NO';
    Is there an inconsistency?

    Read the article

  • SQL Server: Is it possible to prevent SQL Agent from failing a step on error?

    - by franklinkj
    I have a stored procedure that runs custom backups for around 60 SQL servers (a mix of 2000 through 2008 R2). Occasionally, due to issues outside of my control (backup device inaccessible, network error, etc.), an individual backup of one or two databases will fail. This causes the entire step to fail, which means any subsequent backup commands are not executed and half of the databases on a given server may not be backed up. On the 2005+ boxes I am using TRY/CATCH blocks to manage these problems and continue backing up the remaining databases. On a 2000 server, however, I have no way to prevent an error like this from failing the entire step:
    Msg 3201, Level 16, State 1, Line 1
    Cannot open backup device 'db-diff(\PATH\DB-DIFF-03-16-2010.DIF)'. Operating system error 5 (Access is denied.).
    Msg 3013, Level 16, State 1, Line 1
    BACKUP DATABASE is terminating abnormally.
    I am simply asking if anything like this is possible in SQL 2000, or if I need to go in a completely different direction.

    Read the article

  • Why can't I grant exec on dbms_lock.sleep() OR create a procedure using it (but I can run it fine on its own)

    - by Richard Green
    I am trying to write a small bit of PL/SQL that has a non-CPU-burning sleep in it. The following works in SQL Developer:
    begin
      dbms_lock.sleep(5);
    end;
    BUT (as the same user) I can't do the following:
    create or replace procedure sleep(seconds in number)
    is
    begin
      dbms_lock.sleep(seconds);
    end;
    without the error "identifier 'DBMS_LOCK' must be declared"... Funny, as I could run it outside of a procedure. Just as strangely, when I log in as a DBA and run the command
    grant exec on dbms_lock to public;
    I get
    ERROR at line 1:
    ORA-00990: missing or invalid privilege
    This is Oracle version "Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production".

    Read the article

  • SQL Server: One 12-drive RAID-10 array or 2 arrays of 8-drives and 4-drives

    - by ben
    Setting up a box for SQL Server 2008, which would give the best performance (heavy OLTP)? The more drives in a RAID-10 array the better the performance, but will losing 4 drives to dedicate them to the transaction logs give us more performance? Either 12 drives in RAID-10 plus one hot spare, OR 8 drives in RAID-10 for the database and 4 drives in RAID-10 for the transaction logs, plus 2 hot spares (one for each array). We have 14 drive slots to work with, and it's an older PowerVault that doesn't support global hot spares.
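
    A rough spindle-count sanity check for the two layouts; the per-disk figure below is a generic assumption, not a measurement of these drives:

        # RAID-10 arithmetic: reads can hit every spindle, each write costs two
        # physical IOs (one per mirror side).
        IOPS_PER_DISK = 150     # assumed generic figure for one spindle

        def raid10_iops(drives):
            return {"read": drives * IOPS_PER_DISK,
                    "write": drives // 2 * IOPS_PER_DISK}

        print("12-drive array:", raid10_iops(12))
        print("8-drive data:  ", raid10_iops(8), " 4-drive log:", raid10_iops(4))

    What the raw numbers hide is that a dedicated log array keeps the log writes purely sequential, which for heavy OLTP is often worth more than the extra spindles in one big array.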

    Read the article

  • Need help configuring my Tomcat server without any WAR files

    - by gablin
    I just reinstalled my entire server, and now I can't seem to get my JSP-based website to work on Tomcat anymore. I use the same server.xml file, which worked perfectly before the reinstallation, but no longer. Here's the content of the server.xml file which worked before:
    <!--APR library loader. Documentation at /docs/apr.html -->
    <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
    <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html -->
    <Listener className="org.apache.catalina.core.JasperListener" />
    <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html -->
    <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" />
    <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
    <!-- Global JNDI resources
         Documentation at /docs/jndi-resources-howto.html -->
    <GlobalNamingResources>
      <!-- Editable user database that can also be used by
           UserDatabaseRealm to authenticate users -->
      <Resource name="UserDatabase" auth="Container"
                type="org.apache.catalina.UserDatabase"
                description="User database that can be updated and saved"
                factory="org.apache.catalina.users.MemoryUserDatabaseFactory"
                pathname="conf/tomcat-users.xml" />
    </GlobalNamingResources>
    <!-- A "Service" is a collection of one or more "Connectors" that share
         a single "Container" Note: A "Service" is not itself a "Container",
         so you may not define subcomponents such as "Valves" at this level.
         Documentation at /docs/config/service.html -->
    <Service name="Catalina">
      <!--The connectors can use a shared executor, you can define one or more named thread pools-->
      <!--
      <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"
                maxThreads="150" minSpareThreads="4"/>
      -->
      <!-- A "Connector" represents an endpoint by which requests are received
           and responses are returned. Documentation at :
           Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)
           Java AJP Connector: /docs/config/ajp.html
           APR (HTTP/AJP) Connector: /docs/apr.html
           Define a non-SSL HTTP/1.1 Connector on port 8080
      -->
      <Connector port="8080" protocol="HTTP/1.1"
                 connectionTimeout="20000"
                 redirectPort="8443" />
      <!-- A "Connector" using the shared thread pool-->
      <!--
      <Connector executor="tomcatThreadPool"
                 port="8080" protocol="HTTP/1.1"
                 connectionTimeout="20000"
                 redirectPort="8443" />
      -->
      <!-- Define a SSL HTTP/1.1 Connector on port 8443
           This connector uses the JSSE configuration, when using APR, the
           connector should be using the OpenSSL style configuration
           described in the APR documentation -->
      <!--
      <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
                 maxThreads="150" scheme="https" secure="true"
                 clientAuth="false" sslProtocol="TLS" />
      -->
      <!-- Define an AJP 1.3 Connector on port 8009 -->
      <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />
      <!-- An Engine represents the entry point (within Catalina) that processes
           every request. The Engine implementation for Tomcat stand alone
           analyzes the HTTP headers included with the request, and passes them
           on to the appropriate Host (virtual host).
           Documentation at /docs/config/engine.html -->
      <!-- You should set jvmRoute to support load-balancing via AJP ie :
      <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1">
      -->
      <Engine name="Catalina" defaultHost="localhost">
        <!--For clustering, please take a look at documentation at:
            /docs/cluster-howto.html  (simple how to)
            /docs/config/cluster.html (reference documentation) -->
        <!--
        <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>
        -->
        <!-- The request dumper valve dumps useful debugging information about
             the request and response data received and sent by Tomcat.
             Documentation at: /docs/config/valve.html -->
        <!--
        <Valve className="org.apache.catalina.valves.RequestDumperValve"/>
        -->
        <!-- This Realm uses the UserDatabase configured in the global JNDI
             resources under the key "UserDatabase". Any edits
             that are performed against this UserDatabase are immediately
             available for use by the Realm. -->
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
               resourceName="UserDatabase"/>
        <!-- Define the default virtual host
             Note: XML Schema validation will not work with Xerces 2.2.
        -->
        <!--
        <Host name="localhost" appBase="webapps"
              unpackWARs="true" autoDeploy="true"
              xmlValidation="false" xmlNamespaceAware="false">
        -->
        <!-- SingleSignOn valve, share authentication between web applications
             Documentation at: /docs/config/valve.html -->
        <!--
        <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
        -->
        <!-- Access log processes all example.
             Documentation at: /docs/config/valve.html -->
        <!--
        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
               prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/>
        -->
        <!--
        </Host>
        -->
        <Host name="www.rebootradio.nu">
          <Alias>rebootradio.nu</Alias>
          <Context path="" docBase="D:/services/http/rebootradio.nu"
                   debug="1" reloadable="true"/>
        </Host>
      </Engine>
    </Service>
    </Server>
    The JSP site doesn't use any WAR files or anything like that; there's just a default.jsp in the specified folder D:/services/http/rebootradio.nu which loads the site. As I said, this configuration worked before, but now with the latest version of XAMPP and Tomcat it doesn't work anymore. All I get is a 404 message saying "The requested resource () is not available."

    Read the article
