Search Results

Search found 21053 results on 843 pages for 'out of process'.

  • How to connect an Android client to the localhost of an Apache (PHP) server on my laptop?

    - by user1796310
    I'm trying to create an Android app that sends data over Wi-Fi to the Apache server and MySQL database on my laptop. My device is a Samsung Galaxy Tab 10.1, the calls are HttpGet/HttpPost, and I use XAMPP (Apache and MySQL) to serve and process the PHP. Because Android cannot detect an ad-hoc network from the laptop, I use Virtual Router (for Windows 7) to create a virtual access point so the tablet can connect to the laptop. My problems: [1] In my app (the Android client), which URL should the HttpGet or HttpPost target - localhost on my laptop (127.0.0.1), localhost on Android (10.0.0.1), or the IP address of the virtual router? [2] To reach the laptop's Apache from Android and run the PHP, which port and which IP address/URL do I need to put in the app's HttpGet, and do I need to modify anything in XAMPP's httpd.conf? Thanks a lot.
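
    A minimal sketch of the client call, assuming the laptop is reachable at 192.168.137.1 (the usual Windows 7 hosted-network address - confirm with ipconfig on the laptop) and a hypothetical script path /myapp/process.php; neither value comes from the question:

        import org.apache.http.HttpResponse;
        import org.apache.http.client.HttpClient;
        import org.apache.http.client.methods.HttpGet;
        import org.apache.http.impl.client.DefaultHttpClient;
        import org.apache.http.util.EntityUtils;

        public class ServerCall {
            public static String fetch() throws Exception {
                // 127.0.0.1 on the tablet is the tablet itself; the request must
                // target the laptop's address on the access point's subnet.
                HttpClient client = new DefaultHttpClient();
                HttpGet get = new HttpGet("http://192.168.137.1/myapp/process.php?value=42");
                HttpResponse response = client.execute(get);
                return EntityUtils.toString(response.getEntity());
            }
        }

    Port 80 is XAMPP's Apache default, so no port needs to appear in the URL; httpd.conf only needs changing if a Directory block denies non-local clients.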

  • Why am I getting programs stuck in log_wait_commit under Linux?

    - by staticsan
    There is something subtly wrong with my Linux install that I just can't locate. It is Ubuntu Lucid Lynx (10.04) 64-bit. Hardware is a Dell Optiplex 960: Intel Core 2 Quad CPU, 8GB of RAM, 2x 300GB HDDs. /home is ext2 on one disk and everything else is on the other (/ is ext3). I have VirtualBox running a 64-bit Vista image for Outlook calendaring, but the heavyweight apps are IntelliJ, NetBeans, MySQL and Opera. Opera also loads my mail (IMAP), of which there are over 10,000 messages.

    The problem is that Opera stalls for a few seconds from time to time. Watching the process list shows it's in log_wait_commit, which means (as far as I have figured out) the filesystem is holding things up. Sometimes I can trigger it by doing a Subversion update, but usually it happens for no reason I can see. It usually happens to Opera, but I've seen NetBeans go under too. It doesn't make the app crash - it's just completely unresponsive for a few seconds.

    Googling has not helped. The closest I got was removing the sync attribute on the filesystem, which achieved nothing. On the advice of a Linux guru friend, I lowered /proc/sys/vm/dirty_writeback_centisecs to 300, but that didn't do anything either, and it was all he could think of. What is going on, and how can I fix it?
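
    A sketch of things worth checking, assuming the stalling filesystem is ext3 (log_wait_commit is the wait on kjournald's journal commit); the remount option is illustrative, not a confirmed fix:

        # confirm journal mode (data=ordered vs writeback) and writeback interval
        grep -E ' / | /home ' /proc/mounts
        cat /proc/sys/vm/dirty_writeback_centisecs
        # lengthen the ext3 commit interval (default 5s) so journal flushes
        # are less frequent; can be applied to a live mount
        mount -o remount,commit=30 /home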

  • Server taking too long to respond error

    - by DCJones
    Hi, this is my first post on Server Fault and my first attempt at web server configuration. The hardware and software: CPU: GenuineIntel, Intel(R) Core(TM)2 Duo CPU E7500 @ 2.93GHz; OS: Linux 2.6.18-128.el5; Memory: 2GB.

    Background: I am running a small MySQL database, around 1,000 records with 44 fields per record. At the start of each day (00:01) the tables are cleared and repopulated with fresh data. There are 10 remote PCs, all running Windows XP and the Firefox browser, each connected to the internet over a minimum 4Gb broadband connection. Each remote PC loads a URL that displays a dynamic page of data refreshed every 20 seconds, continuously, 24 hours a day.

    The problem I am having is that at odd times throughout the day the browsers error with "Server taking too long to respond". What I am trying to find out is whether I have the correct settings in the server's httpd.conf. Any help or advice anyone can provide would be very welcome. Best regards, Dereck

    Server config file: httpd.conf

        ServerRoot "/etc/httpd"
        PidFile run/httpd.pid
        Timeout 120
        KeepAlive On
        MaxKeepAliveRequests 200
        KeepAliveTimeout 5
        StartServers 8
        MinSpareServers 5
        MaxSpareServers 20
        ServerLimit 256
        MaxClients 254
        MaxRequestsPerChild 4000
        StartServers 2
        MaxClients 150
        MinSpareThreads 25
        MaxSpareThreads 150
        ThreadsPerChild 25
        MaxRequestsPerChild 0
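
    The duplicated StartServers/MaxClients lines suggest the stock prefork and worker sections with their IfModule wrappers stripped; only the section for the compiled-in MPM (prefork, typically) applies. A hedged sketch of a prefork section sized for a 2GB box - the values are illustrative assumptions, not a tested fix:

        <IfModule prefork.c>
            StartServers          8
            MinSpareServers       5
            MaxSpareServers      20
            ServerLimit         150
            MaxClients          150
            MaxRequestsPerChild 4000
        </IfModule>
        # short keepalives free workers faster under many polling clients
        KeepAlive On
        KeepAliveTimeout 3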

  • Can't start my phpMyAdmin

    - by vrynxzent
    I am building my own portable server, but I can't get phpMyAdmin to run: MySQL, PHP and Apache all start, but phpMyAdmin does not. Apache's error log shows:

        [Fri Nov 09 08:54:37 2012] [warn] pid file F:/Drive WebServer/Drive WebServer/bin/Debug/Apache2bak/logs/httpd.pid overwritten -- Unclean shutdown of previous Apache run?
        PHP Warning: PHP Startup: Unable to load dynamic library './php_mysql.dll' - The specified module could not be found.\r\n in Unknown on line 0
        PHP Warning: PHP Startup: Unable to load dynamic library './php_mysqli.dll' - The specified module could not be found.\r\n in Unknown on line 0
        [Fri Nov 09 08:54:37 2012] [notice] Apache/2.0.64 (Win32) PHP/5.2.17 configured -- resuming normal operations
        [Fri Nov 09 08:54:37 2012] [notice] Server built: Oct 18 2010 01:36:23
        [Fri Nov 09 08:54:37 2012] [notice] Parent: Created child process 6784

    I manually set the exact path, F:/php/ext/php_mysql.dll, but got the same warning ("Unable to load dynamic library 'F:/php/ext/php_mysql.dll' - The specified module could not be found."). In php.ini I have extension_dir = "./". Another error also pops up saying libmysql.dll is missing. PHP version: 5.2.17. Any help would be appreciated. ;)
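
    A sketch of the php.ini changes, assuming PHP is installed at F:\php (only the drive letter comes from the question). Note that "module could not be found" also fires when a dependency of the extension is missing: php_mysql.dll links against libmysql.dll, which must sit in a directory on the Windows PATH:

        ; use an absolute extension directory instead of "./"
        extension_dir = "F:\php\ext"
        extension = php_mysql.dll
        extension = php_mysqli.dll

    If the libmysql.dll error persists, copy F:\php\libmysql.dll next to Apache's httpd.exe or add F:\php to the system PATH, then restart Apache.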

  • Missing Memory on Windows Server 2008

    - by Chris Lively
    I have a Windows 2008 x64 server with 8GB of RAM installed. Task Manager and Resource Monitor both insist that 7.5GB of the RAM is in use. However, the memory listed under Processes (Memory Private Bytes) doesn't add up: with "Show processes from all users" checked, hand-adding the numbers gives me about 3.5GB. I also looked at the latest SysInternals Process Explorer, and neither Private Bytes nor Working Set adds up to more than about 3.5GB in use. What's going on?

    Update: I bounced the server to see what would happen with memory utilization. After boot and regular operation it sat at 3GB of RAM used. 18 hours later it's back up to 6.8GB, with no indication as to where the additional 3.5GB or so is going. Here are links to screenshots of Resource Monitor and Task Manager.

    Update 2: I believe I located the problem. When I detached one of the larger databases from my SQL Server, the amount of RAM shown as "in use" dropped drastically, while the Memory Private Bytes count barely moved. So I'm guessing SQL Server has some way of allocating memory that doesn't really show up in any of the monitors. I went further and created a new database file, then transferred all of the data from the one I detached. Even with the same data and the same transactions going through it, memory use has stayed low. Maybe there was some corruption in the DB? I'll leave it to the DB gods and go searching for another "problem" ;)
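
    The hunch in Update 2 is right: SQL Server can hold memory (AWE/locked pages) that never appears in any process's private bytes or working set. A sketch for confirming this from SQL Server itself, using standard DMVs available in SQL Server 2008 and later; the 4096 MB cap is an illustrative value, not a recommendation:

        -- how much memory the engine really holds, including locked pages
        SELECT physical_memory_in_use_kb / 1024 AS mem_in_use_mb,
               locked_page_allocations_kb / 1024 AS locked_pages_mb
        FROM   sys.dm_os_process_memory;

        -- cap it so the OS keeps headroom
        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 4096; RECONFIGURE;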

  • Plesk Uninstall Memory issue

    - by user115079
    I am trying to uninstall Plesk from my VPS by running: yum remove sw-* psa-* plesk-*. When I run this command I get the following error:

        Running rpm_check_debug
        Running Transaction Test
        memory alloc (4 bytes) returned NULL.

    The first time I ran it, the alloc was a very big number (67864987) instead of 4 bytes. I googled it, found some clear/ulimit suggestions, executed them, rebooted, stopped all processes and ran the command again, but I still get the 4-byte failure and don't know how to get rid of it. I also tried ulimit after the reboot with no success. And yes, no swap is attached. These are the stats of my system:

        [root@vps ~]# free -m
                     total       used       free     shared    buffers     cached
        Mem:           384         67        316          0          0          0
        -/+ buffers/cache:         67        316
        Swap:            0          0          0

        top - 21:01:07 up 3:12, 1 user, load average: 0.24, 0.08, 0.03
        Tasks:  31 total,  2 running, 29 sleeping,  0 stopped,  0 zombie
        Cpu(s): 0.0%us, 0.0%sy, 0.0%ni, 100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
        Mem:    393216k total,   69832k used,  323384k free,       0k buffers
        Swap:        0k total,       0k used,       0k free,       0k cached

    Is there any other way to achieve my goal of uninstalling Plesk? Thanks.
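
    With only 384MB and no swap, yum's transaction test can genuinely run out of memory. A sketch of adding temporary swap first - the path and size are assumptions; this normally works on a Xen VPS like this one (kernel ...el5xen) but is not permitted inside OpenVZ containers:

        dd if=/dev/zero of=/swapfile bs=1M count=512
        chmod 600 /swapfile
        mkswap /swapfile
        swapon /swapfile
        yum remove sw-* psa-* plesk-*
        swapoff /swapfile && rm -f /swapfile   # clean up afterwards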

  • PHP Web Server Solution (Apache/IIS)

    - by njk
    I apologize if this is too broad or belongs on Super User (please vote to move if it does). I'm drafting requirements for an internal PHP web server to submit to our architecture team, and would like some insight on whether to use a Windows or *nix platform and which applications would be required. The server will host a small PHP application that connects to SQL Server and needs to send mail. We would also like an FTP server so files can be dropped in.

    From what I've read, IIS on Windows seems advantageous only for .NET or ASP applications. Does IIS have mail functionality, and how is mail traditionally configured (especially on *nix)? Also, does IIS have per-directory configuration like Apache's .htaccess?

    For a Windows-based solution: IIS (comes with FTP) or Apache (has the mod_ftp module). For a *nix-based solution: Apache.
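
    On the mail question: IIS itself is not a mailbox server, but Windows Server ships an SMTP relay service (the IIS 6-style SMTP feature), and PHP's mail() on Windows simply hands messages to whatever SMTP host php.ini names. A sketch of that configuration, with a placeholder hostname:

        ; php.ini on the Windows/IIS box
        SMTP = smtp.internal.example.com
        smtp_port = 25
        sendmail_from = webapp@example.com

    On per-directory configuration: IIS 7+ has no .htaccess, but its web.config files are likewise read per directory and can override handlers, rewriting and authorization.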

  • Looking for a PowerShell script that can pull a file from a set of PCs and FTP it to a server

    - by DangeRuss
    I'm looking to write a script (preferably PowerShell) that will copy a file from a bunch of PCs and FTP it to a server. The structure of the environment is that we have a file on multiple PCs (around 50) that needs to be placed on a server. Sometimes a PC may be turned off, so the script would first need to ensure the PC is up and running (maybe a ping result), then go into a directory on that PC, pull a file off it, rename the file, place it into a source directory, and remove the original. Naming convention doesn't matter, but a date/time stamp would be easiest. Ideally, all the files should first be moved to a source directory to save on FTP bandwidth, but since the files are named the same, they must be renamed during the move. Move, not copy, because the directory needs to be empty so the file can be recreated the next day. Once moved to the source directory, all the files need to be FTP'd to a server for processing. After all of this, we need to know which PCs on the list did not respond so we can retrieve their files manually, so the script should output a file (txt is fine) listing the PCs that were offline. Everything is in one domain, and the script will run from a server with admin credentials. Thank you!
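
    A minimal sketch of the flow, not production code - the PC list file, remote path, and FTP host/credentials are all placeholder assumptions:

        $pcs     = Get-Content 'C:\scripts\pcs.txt'       # hypothetical list of PC names
        $staging = 'C:\staging'
        $offline = @()

        foreach ($pc in $pcs) {
            if (Test-Connection -ComputerName $pc -Count 1 -Quiet) {
                $src = "\\$pc\c$\data\report.dat"          # hypothetical remote file
                if (Test-Path $src) {
                    $stamp = Get-Date -Format 'yyyyMMdd_HHmmss'
                    # Move, not copy, so tomorrow's file can be recreated in an empty dir
                    Move-Item $src (Join-Path $staging "${pc}_${stamp}.dat")
                }
            } else {
                $offline += $pc                            # collect non-responders
            }
        }
        $offline | Set-Content 'C:\scripts\offline.txt'    # PCs to fetch manually

        # FTP everything in staging to the server (placeholder host/credentials)
        $client = New-Object System.Net.WebClient
        $client.Credentials = New-Object System.Net.NetworkCredential('ftpuser', 'secret')
        Get-ChildItem $staging | Where-Object { -not $_.PSIsContainer } | ForEach-Object {
            $client.UploadFile("ftp://ftp.example.com/incoming/$($_.Name)", $_.FullName)
        }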

  • Windows Server 2008 R2 grinds to a screeching halt during file copy operations

    - by skolima
    When my Windows Server 2008 R2 machine performs any large disk operation (copying 10GB files from one drive to another, copying similar files over the network, merging Hyper-V snapshots, compressing large files), the whole machine slows down terribly and everything becomes unresponsive. This is noticeable whenever the disk access is large enough not to fit in the cache. Are there any settings available for tuning this behaviour? I can accept slower file transfer if it gives me more responsiveness.

    System details: Dell Optiplex 960, Core 2 Quad Q9650, 8GB RAM, 2 SATA drives - 320GB (ST3320418AS) and 1TB (ST31000528AS), NCQ active on both, Intel 82564LM-3 Gigabit Ethernet, ATI HD 3450 graphics, Intel ICH10 bridge. We have multiple machines like this and every one exhibits the same behaviour. I thought this was overkill for a workstation; apparently I was mistaken.

    Update: I guess I shouldn't have mentioned Hyper-V at all. The above configuration is a standard workstation setup at the company I work for; this is not a server of any kind. I have at most 3 virtual machines running, and usually I'm the only person accessing them. Nevertheless, the slowdown occurs even when no VMs are running. On a Linux machine I'd simply ionice the copy process and forget about it - is there any way to manage I/O priorities on Windows?
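
    There is no direct ionice equivalent in stock Windows, but large copies can be throttled so they stop monopolizing the disk. A sketch using robocopy, which ships with Server 2008 R2 - the paths are placeholders:

        :: /IPG inserts a gap (ms) between 64 KB blocks, trading copy speed
        :: for system responsiveness
        robocopy D:\source E:\dest bigfile.vhd /IPG:50

        :: alternatively, run the copying process at low CPU priority
        :: (this lowers scheduling priority, not I/O priority)
        start /low /wait xcopy D:\source\bigfile.vhd E:\dest\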

  • Accessing or Resetting Permissions of a Mounted Registry Hive of a Different User / From a Different System

    - by Synetech
    I’m currently stuck using my backup system until I can replace my dead motherboard. In the meantime, I have put my hard-drive in this system so that I can access my files and keep working on the backup system. Fortunately, I don’t have a permission issues with the files (the partitions are FAT32). The issue I’m having is with the registry. I need to import some of my settings from the hives of my (old? normal?) installation of Windows into the one I’m currently using. Settings from the system hives (SYSTEM, SOFTWARE, etc.) are fine, but the user hive is giving me trouble. I’ve copied the NTUSER.DAT file from my other drive and mounted it with the reg command. Most of the keys (eg Software) are fine and I can access them without problem, but some of them (particularly the Identities key where Outlook Express settings are stored) complains that it cannot be opened. If I open the permissions dialog, I get an error about being unable to view the current permssions. If I then ignore it and try to take ownership of the key and it’s subkeys, I get an access-denied error. If I then add permissions for my user account on this system, I get an error, however I am then able to see the subkeys and values of the key. If I then try to access the subkeys, I get the same original errors. If I repeat the process for each subkey, I can see their values and subkeys, and so on, but of course this gets to be incredibly annoying and time-consuming (especially since the Identities key has a lot of subkeys). Is there an easier/temporary/more correct way to dump a key so that I can import it into my backup system?

  • Suggestion regarding pointing domains to a dedicated server.

    - by Bizz
    I recently got a dedicated server and I am still at a learning stage, so please bear with me. I wanted to point 3 domains to my server. When I asked the host how to point the first one, they responded:

        Hello, I can set that rDNS for you. Please make sure the domain is
        pointed to our name servers at godaddy. They are: ns1.xxxxx.net
        ns2.xxxxx.net. After this is done, please allow up to 24 hours for
        global propagation. Alternatively, we can host your DNS for you if
        you prefer. What is the domain you would like xx.xxx.xx.xx to
        resolve to?

    I then asked them to point one of my domains, and they replied: "Did you want us to host your DNS for that domain or just an rDNS record?" They also said: "Hosting your DNS is a free service here. We can only do 1 domain per IP. If you would like to purchase additional IPs, they are $1/IP per month."

    I personally don't want to host DNS myself, nor a mail server. I have a single IP so far, so it starts to get expensive if I want to host 25 domains with these guys, and I am still in the trial period. Does this pricing seem reasonable? Having someone else host DNS and a mail server also gets expensive: email hosting from Rackspace starts at $2 per mailbox, and climbs if you want added features such as archiving. What would you suggest for a shoestring budget, given that I want to avoid the hassle of doing it myself, have 3 domains so far, and need a few mail addresses for each of them?
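
    For context, a sketch of what hosting DNS yourself would amount to - a minimal BIND-style zone with placeholder names and a documentation IP:

        ; zone file for example.com (illustrative only)
        example.com.       IN  A      203.0.113.10
        www.example.com.   IN  CNAME  example.com.
        example.com.       IN  MX 10  mail.example.com.
        mail.example.com.  IN  A      203.0.113.10

    Note that the "1 domain per IP" limit the host quotes applies to rDNS (one PTR record per address), not to forward DNS: any number of domains can share one IP for web hosting.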

  • Determining how all memory is used in Windows Server 2008

    - by Mojah
    Hi, I have a Windows Server 2008 system with 12GB of RAM. If I list all processes in Task Manager and sum the memory of each process (Working Set, Memory (Private Working Set), Commit Size, ...), I never reach more than 4-5GB that should be "in use" - yet Task Manager's Performance tab reports 11GB in use. I'm failing to determine where all that used RAM is going. It doesn't seem to be system cache, but I can't be sure. It might be a memory leak in one of the applications, but I'm struggling to find out which one. The server's memory keeps filling up and eventually forces us to reboot to clear it. I've been reading up on how RAM assignment works on Windows Server:

    RAM, Virtual Memory, Pagefile and all that stuff: http://support.microsoft.com/kb/2267427
    What's the best way to measure? http://www.zdnet.com/blog/bott/windows-7-memory-usage-whats-the-best-way-to-measure/1786
    Configure the file system cache in Windows: http://smallvoid.com/article/winnt-system-cache.html

    But I fear I'm stuck without ideas at the moment.
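
    A sketch for sizing the gap from PowerShell; the shortfall between this sum and the Performance tab is usually kernel pool, mapped-file cache, or pages locked by a service such as SQL Server, which bypass working-set accounting:

        # total of all process working sets, for comparison with "in use"
        Get-Process | Measure-Object WorkingSet -Sum |
            ForEach-Object { '{0:N0} MB in process working sets' -f ($_.Sum / 1MB) }

    Sysinternals RAMMap gives the authoritative breakdown by page type (process private, mapped file, paged/nonpaged pool, driver locked).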

  • How to protect folder privacy against unethical network administrators? [closed]

    - by Trevor Trovalds
    I just need a technical solution to keep my group's shared passwords, projects and work safe. Our network has Active Directory with public/groups/users shares and NTFS permissions, on Windows Server 2003, soon migrating to Windows Server 2008 R2. Our IT crowd is small: 2 DBAs, 4 designers, 6 developers (including me), 2 netadmins and a lot of tech supporters; everyone has local admin rights. The two network admins weren't the ones who set the network up - they took over recently when the previous ones quit.

    We regularly find them laughing at private content users have stored on the group shares and sabotaging documents that don't match their personal tastes. Finally, this week we found out they stole a project we (developers and DBAs) were finishing and had long since presented it to the CEO as their own, without us knowing.

    I'm a systems analyst, and my group initially decided to store critical content, like shared passwords, inside encrypted .zip files. Unfortunately we couldn't do the same for the other hundreds of folders and files, including the stolen project, because re-zipping would take too long on every update. We also tried an encrypted Subversion repository over SSL, but many of the people involved (~38 at the moment) struggle with TortoiseSVN when contributing, and we very often had to fix messed-up updates. Those two attempts give an idea of what we are trying to reach. So: is there a practical "individual" protection for our extensive data, or can my hope already be euthanized?

    P.S.: Seriously, where I live and work, political corruption has gone wild, so reporting them is likely impracticable - both netadmins have strong political bonds with the CEO and the President, hence their lousy behavior and our failed attempts to denounce them.
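
    One technical option that survives hostile local admins is per-user encryption rather than NTFS permissions. A sketch using EFS, which ships with the OS - with the big caveat that a domain Data Recovery Agent, if configured, can still decrypt, so check the recovery policy first:

        :: encrypt a folder and everything under it for the current user
        cipher /e /s:D:\Projects\Shared

    For group access, each member's EFS certificate must be added to every file, which scales poorly; an encrypted container (TrueCrypt, in this era) mounted only while working is the common alternative.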

  • How to use non-free drivers during debian install

    - by blokeley
    I'm trying to install Debian stable using UNetbootin. The install process fails with "network autoconfiguration failed", probably because the ethernet driver isn't working. My Lenovo U350 has a Broadcom BCM57780, which does not seem to be supported out of the box: there are various bug reports about it, but I don't know whether the fix has made it into Debian 6 stable. One discussion says you have to use an ethernet driver from the firmware-linux-nonfree package. I'm not sure that's correct, because the BCM57780 is not in firmware-linux-nonfree's list of drivers.

    The specific question tree: 1. Is the BCM57780 supported in Debian stable? 2. If so, what could be wrong? Should I install Debian unstable instead? 3. If not, do I need to use firmware-linux-nonfree during installation and, if so, how do I do this?

    Please note: I've used Ubuntu and Debian loads in the past, but please post line-by-line guidance rather than a cryptic abbreviation of the instructions. Thanks in advance for any help.

    Updates: Debian stable with non-free drivers did not work. Debian unstable (free drivers only) did not work. I tried loading firmware-iwlwifi_0.28_all.deb from another USB stick to get wireless working instead of the BCM57780; the .deb file was found, but network configuration still failed! That's it, I'm giving up. Unfortunately I'll use Ubuntu, even though the Unity user interface will be very unstable for the next couple of years :(
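
    On question 3, a sketch of the standard mechanism: debian-installer pauses when firmware is missing and scans removable media for a /firmware directory containing the packages. The package name below is an assumption - check packages.debian.org for the right one for your NIC:

        # on another machine, prepare a second USB stick
        mkdir -p /media/usb/firmware
        cp firmware-linux-nonfree_*.deb /media/usb/firmware/

    Debian also publishes unofficial installer images with non-free firmware included (under cdimage.debian.org, "unofficial/non-free/cd-including-firmware"), which avoids the second stick entirely.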

  • Searching SharePoint site with Windows Explorer

    - by alexsome
    Every week, I manually back up recent versions of the files on my group's SharePoint site. I open the library in Windows Explorer, search for all files modified in the past week, then copy and paste them to a network location. We need this process because our SharePoint site has a quota we would easily hit if we kept unlimited versions, so we keep a history of older versions on the network. I recently got an upgrade to my work computer and now I am unable to search the site using Windows Explorer: a search for files modified in the last week returns no results. If I run a search with no criteria on the file library, all the files are found, but the "modified on" field is blank - the results show only the file and type fields. The new computer has Windows XP, just like the old one. I hope this makes sense. Does anyone have any clue what the problem could be? I'd be happy to provide more info if necessary. It's bugging me to no end and I'm not even sure where to begin looking - it's either a trivial issue or a very obscure one. Thanks a lot.

  • PHP 5.3.5 Windows installer missing php_ldap.dll

    - by nmjk
    I'm working with Windows Server 2008 and Apache 2.2, using php-5.3.5-Win32-VC6-x86.msi (the thread-safe version) as the installer. I've gone through the install process four or five times just to make sure I'm not missing anything ridiculous, but I don't think I am. The problem is that the php_ldap.dll extension simply doesn't seem to exist: it's not present in the installer interface (where the user chooses which extensions to install), and it definitely doesn't appear in the ext/ directory after install. I found many mentions of this issue for 5.3.3, including links to download the extension individually. Those links no longer exist, of course, and besides, they were for 5.3.3 - I'd really rather use an extension that belongs with PHP 5.3.5. Has anyone else encountered this problem? Any ideas as to what's going wrong? Has anyone seen acknowledgement from the PHP folks that the file is indeed missing and that it's an oversight? It's quite a frustration, because the server I'm building has no purpose if I don't have PHP LDAP support. Cheers all, and thanks in advance for your assistance.
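
    For reference, a sketch of what enabling the extension looks like once a matching php_ldap.dll (5.3.x, VC6, thread-safe - it must match the installed build exactly) is dropped into ext\; the install path is an assumption. A known Windows quirk worth checking once the DLL exists: php_ldap.dll depends on libeay32.dll and ssleay32.dll from the PHP root, which must be on the PATH or beside the web server binary, or loading fails:

        ; php.ini
        extension_dir = "C:\Program Files (x86)\PHP\ext"   ; assumed install path
        extension = php_ldap.dll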

  • "merging" multiple internet connections

    - by Spencer R
    I've seen this question asked several times here on SF, but I'm looking for updated information, specifically concerning Server 2012. I'm in the process of buying a home, so I'm trying to put together plans for how to structure my network. Internet speeds aren't great and connections can be unreliable where the house is, so I'm thinking of having two DSL lines installed. My question is: how could I leverage those two connections to create the best network I can, in terms of speed and reliability? My parents will be moving in with me - they consume a lot of bandwidth as it is, and adding my traffic on top is headed for a lot of frustration. I seem to remember reading that Server 2012 has new functionality for utilizing multiple connections on multiple NICs in a way that wasn't possible in earlier versions. I'm not sure Windows will work here, but I'm an application developer who spends the majority of his time in Windows environments. I've only recently returned to the Windows world, so I'd like my main server at home to run Server 2012 so that I can become more familiar with it.
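
    A note plus sketch: the Server 2012 feature in question is NIC teaming (LBFO), which aggregates NICs on one network - it will not merge two independent DSL lines, each with its own gateway; that calls for a multi-WAN load-balancing router in front. The teaming cmdlet, with placeholder adapter names:

        # native Server 2012 teaming (same switch/subnet only)
        New-NetLbfoTeam -Name 'Team1' -TeamMembers 'NIC1','NIC2' `
            -TeamingMode SwitchIndependent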

  • How to create a WHM/cPanel account, without creating a new sub-domain?

    - by Cyclops
    I have a basic VPS (full root access) with WHM/cPanel, and am learning the ropes. I'm trying to create a new account for an existing domain (mysite.com), and so far WHM won't let me: it wants either a subdomain or a fake domain, but won't allow two accounts for one domain.

    In the beginning there was only the root account, which wouldn't let me log in to cPanel. A quick chat with tech support established that I needed to create a second account, which I did: call it ns1me, for the domain mysite.com. Now I want to create a django account. I go through the same process, but WHM won't allow me to use mysite.com as the domain for django. The docs recommend a subdomain, so I fill in django.mysite.com - and then realize that this has actually created a subdomain: browsing to django.mysite.com shows its home directory, along with helpful information about the running versions of Apache, Python and other mods (thanks, Apache). I really don't want a subdomain, so that's out. Another chat with tech support recommended a fake domain name, since that creates nothing. Sure enough, a domain of djangomysite.com works, and WHM allows me to create the django account - but of course I can't send email to django@mysite.com (where I could to ns1me@mysite.com).

    What I want is to create a second account associated with mysite.com (so I can run cPanel logged in as django, send email to django@mysite.com, etc.) without creating a whole new subdomain or fake domain.

  • Am I safe on Windows if I continue like this?

    - by max
    Of all the tons of anti-malware software available for Windows all over the internet, I've never used a paid solution (I am a student; I have no money). In the last 10 years, my computers running Windows have never been hacked, compromised, or infected so badly that I had to reformat them (I did reformat them for other reasons, of course). The only security program I have is the free Avast Home Edition, installed on all my computers. It has never caused any problems: it has always detected malware, updates automatically, has an option to sandbox programs, and has everything else I need. Even when I did get infected, I just ran a boot-time scan with it, downloaded and ran Malwarebytes, scanned Autoruns logs, checked running processes with Process Explorer, did some other things, and made sure the computer was clean. I am quite experienced and have always taken basic precautions, like not clicking suspicious executables and not visiting sites WOT flags as suspicious, and all that. But recently I've been doing more and more online transactions, and since it's 2012 now, I'm doubtful whether I need more security or not. Have I just been lucky, or do my computing habits obviate the need for more (or paid) security software?

  • Audit file removal (auditctl)

    - by user1513039
    For some reason, some script or program is removing a pid file for a service on my Linux server (CentOS 5.4 / 2.6.18-308.4.1.el5xen). I suspect a faulty cron script, but manual investigation has not led me to it, and I still want to track it down. I have been using this auditctl rule:

        auditctl -w /var/run/some_service.pid -p w

    which shows me something, but not quite what I want:

        type=PATH msg=audit(11/12/2013 09:07:43.199:432577) : item=1 name=/var/run/some_service.pid inode=12419227 dev=fd:00 mode=file,644 ouid=root ogid=root rdev=00:00
        type=SYSCALL msg=audit(11/12/2013 09:07:43.199:432577) : arch=x86_64 syscall=unlink success=yes exit=0 a0=7fff7dd46dd0 a1=1 a2=2 a3=127feb90 items=2 ppid=3454 pid=6227 auid=root uid=root gid=root euid=root suid=root fsuid=root egid=root sgid=root fsgid=root tty=pts0 ses=38138 comm=rm exe=/bin/rm key=(null)

    The problem is that I see the ppid of the script that removed the file, but by analysis time the (p)pids are already invalid, as the scripts/programs have shut down - imagine a cron script deleting the file. So I need some way to expand or add audit rules to be able to trace the parents of the /bin/rm at the time of removal. I have thought about monitoring all process creation with something like:

        auditctl -a task,always

    but that happens to be very resource-intensive. I need help or advice on how to combine these rules, or how to expand them to track down the script/program. Thanks.
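
    A sketch that keeps the existing watch and adds execve logging, so the ppid in the unlink record can be matched to a command line even after the parent exits - far cheaper than auditing all task creation. The key names are arbitrary; adjust arch for 32-bit userland:

        # record every exec with a searchable key
        auditctl -a exit,always -F arch=b64 -S execve -k exec-log
        # after the pid file disappears: note ppid=NNNN from the unlink event,
        # then find what was executed as that pid around the same time
        ausearch -k exec-log --interpret | grep 'pid=NNNN'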

  • Why is "start in" needed for Windows scheduled tasks?

    - by GomoX
    We develop a web application that can be deployed on Windows or Linux. The Linux implementation uses cron; the Windows one uses a scheduled task to run a single PHP script that processes all of our system's scheduled work. The task is created with schtasks during the install process. This has always worked under both W2003 and W2008.

    A week ago a customer on Windows 2008 reported that the scheduled task was not running. We checked over and over, and finally solved the issue by entering the folder containing the .vbs script as the task's "start in" folder. That said, there is no way to set the "start in..." value from schtasks without using an XML task definition, and XML definitions don't work on Windows 2003, so I would have to add Windows version detection to the installer, additional testing, etc. (I'd like to avoid this if at all possible). The only atypical thing I noticed about this install is that the system is installed on D:\ rather than the default C:\Program Files (x86)\, but I don't see how that would matter - all the paths are absolute in all the scripts. Can anyone suggest a reasonable solution for this?
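
    One way out that avoids both XML and version detection, sketched with placeholder task name and paths: make the scheduled command itself establish its working directory, so "start in" becomes irrelevant:

        schtasks /Create /TN "AppScheduledTasks" /SC MINUTE /MO 5 ^
            /TR "cmd /c cd /d D:\MyApp\scripts && cscript //nologo runtasks.vbs"

    As for why it matters at all: without "start in", Task Scheduler launches the action with a default working directory (typically %WinDir%\System32), so any relative path resolved inside the .vbs points at the wrong folder.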

  • What steps should I take to secure Tomcat 6.x?

    - by PAS
    I am in the process of setting up a new Tomcat deployment and want it to be as secure as possible. I have created a 'jakarta' user and have jsvc running Tomcat as a daemon. Any tips on directory permissions and such to limit access to Tomcat's files? I know I will need to remove the default webapps - docs, examples, etc.; are there any best practices I should be using here? What about all the XML config files - any tips there? Is it worth enabling the Security Manager so that webapps run in a sandbox, and has anyone had experience setting this up? I have seen examples of people running two instances of Tomcat behind Apache, done with either mod_jk or mod_proxy; any pros/cons of either, and is it worth the trouble? In case it matters, the OS is Debian Lenny. I am not using apt-get because Lenny only offers Tomcat 5.5 and we require 6.x. Thanks!
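
    A sketch of a least-privilege layout, assuming CATALINA_HOME=/opt/tomcat (an assumed path) and the 'jakarta' daemon user from the question:

        # root owns the tree; the runtime user gets read access only
        chown -R root:jakarta /opt/tomcat
        chmod -R g-w,o-rwx /opt/tomcat
        # only the directories Tomcat writes at runtime stay writable
        chown -R jakarta:jakarta /opt/tomcat/logs /opt/tomcat/temp /opt/tomcat/work
        # drop the default apps (docs, examples, manager if unused)
        rm -rf /opt/tomcat/webapps/docs /opt/tomcat/webapps/examples

    On mod_jk vs mod_proxy: mod_proxy (with mod_proxy_ajp or mod_proxy_http) is simpler and already part of Apache; mod_jk offers finer load-balancer tuning but must be built and maintained separately.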

  • Can I have a single solid state drive and a RAID array on the same machine? [closed]

    - by jaminto
    Hi. To summarize: I'm looking to use a single solid-state drive as my primary drive and two conventional SATA drives in a RAID 1 configuration for data, and I am trying to install 64-bit Windows 7 onto this configuration. Is this possible? Here are the details:

    I built a desktop that has been running 64-bit Vista on two 500GB drives in a RAID 1 array for a few years. I just purchased an Intel X25-M 80GB SATA solid-state drive and was planning to use it as my primary drive, keeping the RAID 1 array as my data drive. I added the SSD and, in the RAID setup, configured it as a RAID 0 array of a single disk. Then I tried a clean install of 64-bit Windows 7, but got stuck in the "missing driver for CD/DVD drive" black hole of selecting driver files and being told I don't have the appropriate driver for my hardware. The missing hardware is NOT a CD/DVD drive, since I'm installing from my only CD/DVD drive. At one point I was able to point it at a driver for my RAID controller, after which my hard drives magically showed up as browsable sources for drivers for some other unnamed device setup couldn't recognize.

    After a few hours of trying drivers (a very slow process) I decided to reboot and look at the BIOS settings. I'm using an ASUS M2A-VM motherboard, which has an onboard ATI SB600 RAID controller. I switched the "On board SATA Type" setting from "SATA" to "AHCI", thinking that since AHCI is an Intel thing, this would help. Unfortunately, that abandoned my RAID configuration, and my previously mirrored drives now show up as separate drives when I boot into my current Windows installation. Am I trying to do the impossible here? Should I just buy a separate SATA/RAID PCI card and plug the SSD into that? Any help would be greatly appreciated.

  • iptables, forward traffic for ip not active on the host itself

    - by gucki
    I have a KVM guest whose network card is connected to the host via a tap device. The tap device is part of a bridge on the host, together with eth0, so the guest can access the public network. So far everything works: the guest can reach the public network and can be reached from it. Now, the kvm process on the host provides a VNC server for the guest, listening on 127.0.0.1:5901 on the host. Is there any way to make this VNC server reachable at the IP address the guest is using (e.g. 192.168.0.249) without disrupting the guest's use of that IP (port 5901 is not used by the guest)? It should also work when the guest holds no IP address at all. Basically I just want to pretend IP xx is on the host and answer/forward only traffic to port 5901. I tried this NAT rule on the host (IP forwarding is enabled), but it doesn't work:

        iptables -t nat -A PREROUTING -p tcp --dst 192.168.0.249 --dport 5901 -j DNAT --to-destination 127.0.0.1:5901

    I assume this is because 192.168.0.249 is not bound to any interface, so no ARP requests for it get answered and no packets for that IP ever arrive at the host. How can I make it work? :)
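
    A further wrinkle: even when the DNAT matches, packets rewritten to 127.0.0.1 from outside are dropped as martians on kernels of this era. A sketch of two practical routes instead - the bridge address 192.168.0.1 is an assumption:

        # simplest: have kvm bind the VNC server somewhere reachable
        kvm ... -vnc 192.168.0.1:1        # host bridge IP, display :1 = port 5901

        # or relay without touching the guest definition
        socat TCP-LISTEN:5901,bind=192.168.0.1,fork,reuseaddr TCP:127.0.0.1:5901

    Making the host answer for the guest's own 192.168.0.249 on one port is much harder: it would need proxy-ARP plus bridge-level redirection (ebtables) and would conflict with the guest whenever it holds the address.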

  • IIS Web Farm Framework servers are automatically set to "unavailable" even when they are healthy... And they never return to the available state!

    - by JohannesH
    I have 2 web farm configurations, one with 2 member servers and one with 3. I have health monitoring set up on both farms, and the monitoring tool reports all servers as healthy. However, after a while all the servers become marked "Unavailable" (yet "Healthy") on the "Monitoring and Management" screen, while the "Servers" screen still lists them all with "Yes" in the "Ready for Load Balancing" column. Viewing the event log on the web farm controller and on the farm servers doesn't reveal anything interesting: there are no warnings or errors in the period when the servers became unavailable, only a couple of informational events about the worker process being shut down due to inactivity. I hope that isn't the cause, since it would mean the farms die during the night when load is low. Am I missing something?

    EDIT: By the way, I find it very odd that the application pool shuts down on the servers, since the health monitoring system polls an .aspx page on each server - shouldn't that keep it alive? EDIT 2: I have now also experienced this problem with the RTW version of Web Farm Framework 2.
