Search Results

Search found 24117 results on 965 pages for 'write through'.

Page 744 of 965

  • How do I set up an Alias on Apache with XAMPP on Linux? (Permission problem)

    - by knarf
    XAMPP works fine, but I want http://localhost/f to point to /home/knarf/prog/php/fwyxz. I've run chmod -R 777 /home/knarf/prog/php/fwyxz and added Alias /f /home/knarf/prog/php/fwyxz at the end of httpd.conf. When I try to access it, I get a 403. From the Apache error_log: [error] [client 127.0.0.1] (13)Permission denied: access to /f denied. I've already tried several solutions (userdir and symlinks) but they both failed with the same error. I've also tried to add this after the Alias: <Directory "/home/knarf/prog/php/fwyxz"> Order allow,deny Allow from all </Directory> But again, permission denied. Now if I change the User/Group under which Apache runs from nobody to knarf, it seems to work (static files are ok) but PHP can't use/initialize sessions: [error] [client 127.0.0.1] PHP Warning: session_start() [function.session-start]: open(/tmp/sess_r5nrmu4ugqguqqe83rs53lq6k0, O_RDWR) failed: Permission denied (13) in /home/knarf/prog/php/fwyxz/index.php on line 3 [error] [client 127.0.0.1] PHP Warning: Unknown: open(/tmp/sess_r5nrmu4ugqguqqe83rs53lq6k0, O_RDWR) failed: Permission denied (13) in Unknown on line 0 [error] [client 127.0.0.1] PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct () in Unknown on line 0 This is really frustrating.
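
    A common cause of this exact pattern is that the home directory path is not traversable by the Apache user, and the later session warnings point at a save path the new user cannot write. A minimal sketch of both fixes, assuming the default XAMPP layout and the paths from the question:

        # Apache (running as "nobody") needs the execute/traverse bit on every
        # directory leading to the aliased folder, not just on the target itself:
        chmod o+x /home/knarf /home/knarf/prog /home/knarf/prog/php /home/knarf/prog/php/fwyxz
        # The Alias and <Directory> block quoted above should then be enough.

        # If Apache is instead switched to run as "knarf", point PHP at a session
        # directory that user can write (the /tmp session files were created by
        # the old user):
        mkdir -p /home/knarf/tmp/php-sessions
        chmod 700 /home/knarf/tmp/php-sessions
        # then in php.ini:  session.save_path = "/home/knarf/tmp/php-sessions"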

    Read the article

  • Is there a way to run a command before Puppet implements a change?

    - by Patrick
    I want to have puppet run a specific command before performing any type of change. I am aware of the prerun_command option in the main puppet.conf, but this is not what I'm looking for. I want the command to run only if something is about to change, not on every puppet run. Here's the scenario. Let's say I have a bunch of web servers behind a load balancer. I then want puppet to update the web site files. But in order to prevent issues where some files have been updated but others haven't, with the mixed versions causing problems, I want to take the server out of the load balancer pool. I could write a script which, when run, will tell the load balancer to remove the box from the pool. Then puppet can do the change and use postrun_command to put the box back in the pool once complete. But I need a way to run that script to remove the server from the pool. The only solution I can think of is to keep 2 copies of the files on the box: one is a staging copy, and when puppet updates that, a notify action triggers the removal script and then copies from staging into the live location. But I was hoping for something a little more generic that would work on any change being performed (upgrading a package, restarting a service, creating a user, anything).
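
    A minimal shell sketch of the staging-copy idea described here: Puppet manages a staging directory, and an exec resource notified by that file resource calls a wrapper like the one below. The lb-pool command, paths and hostname handling are purely hypothetical stand-ins for whatever the real load balancer exposes.

        #!/bin/bash
        set -euo pipefail

        STAGING=/srv/www-staging        # directory Puppet actually manages
        LIVE=/var/www/site              # directory the web server serves
        HOST=$(hostname -f)

        lb-pool remove "$HOST"                    # hypothetical: drain this node
        rsync -a --delete "$STAGING/" "$LIVE/"    # promote the staged files
        lb-pool add "$HOST"                       # hypothetical: re-enable the node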

    Read the article

  • SSL Mail server connection times out on send()

    - by Jivan
    When trying to programmatically send an email from a website of mine, using the PHP PEAR Mail package over an SSL connection, PEAR::Mail replies with the following: Failed to connect to example.blabla.net:PORT [SMTP: Failed to connect socket: connection timed out (code: -1, response: )] I looked for similar questions on SO and SF; all the answers ask the OP to test the connection with telnet or ssh on the command line. So that is what I did, and here is what happens: $ ssh -l myusername -p PORT example.blablabla.net _ Here, '_' on the second line means that NOTHING happens, indefinitely, which seems consistent with the timeout message I had from PEAR::Mail. So PEAR::Mail does not seem to be the cause. But what I have to tell you is that yesterday it just worked. The connection was properly established, mails were properly sent, etc. Just today it doesn't work anymore and I absolutely don't know why. I restarted Apache (in case an extension was broken), restarted mail services, etc. Still no effect. Between yesterday (when it worked) and today (when it doesn't anymore), I didn't touch the server and did nothing on it, simply because I took a day off to write a blog post! Has anyone encountered a similar problem? The problem seems quite common, judging from some googling, but the solution doesn't. Thanks for any help! (Note on config: CentOS 6.4 x86_64 with cPanel/WHM.)
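
    Since ssh to an SMTP port never completes a handshake even when the port is open, a more telling test separates "can I reach the port at all" from "does TLS negotiate". A quick diagnostic sketch, assuming an implicit-SSL port such as 465 (the hostname and port are the question's placeholders):

        # Does the TCP connection open at all? A timeout here points at a firewall
        # or routing change rather than at PEAR::Mail.
        nc -vz -w 10 example.blabla.net 465

        # Can TLS be negotiated, and does the server send an SMTP banner?
        openssl s_client -connect example.blabla.net:465 -quiet

        # For a submission port using STARTTLS instead:
        openssl s_client -connect example.blabla.net:587 -starttls smtp -quiet

        # On a cPanel/WHM box, check whether a local firewall rule changed overnight:
        iptables -L -n | grep -E 'dpt:(25|465|587)'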

    Read the article

  • Fixing Windows install by connecting its hard drive via USB to a different laptop

    - by Jason
    I tried to upgrade a laptop to SP3, which broke it. I later found out SP3 doesn't work on that 2002 laptop. I can't uninstall SP3, or fix SP2, because the hard drive is now not detected during setup (I've read that's the problem you get). I put the hard drive in a USB drive case and plugged it into my other laptop, and I can read (& write to) the disk okay. (The hard drive won't fit in my other laptop, so I'm using USB.) I need to get that disk back to SP2, or fix whatever files got screwed up causing the disk to not be recognized. I don't want to do a re-install as there are 80GB of files on it I need, and they won't fit on the HD of my other laptop, and also because I no longer have some of the install CDs for software on it. What do I need to do to fix that drive from my other laptop? (I don't want my working laptop (XP SP3) to get screwed with by putting an SP2 disk in the CD drive, or the non-o/s data on the other hard drive screwed with.)

    Read the article

  • Rsync root files between systems without specifying password

    - by xpt
    This seems very tricky to me. I've set up my two systems so that I can rsync files between them as me, without specifying a password. Now the problem is to rsync files that belong to root. On both of my systems there are no root passwords; the only way to become root is via sudo. So I can neither give a password for sudo rsync local root@remote: nor use my ssh-agent to supply a passphrase. I don't want to set up a root password on either system, and I do need the files to be owned by root on both systems. EDIT: Files that belong to root are just an example; I need a way for my unprivileged account to read/write system (including root-owned) files easily. One example is to copy my configured /root environment into a freshly installed system. The two systems are actually two VMs under a single host, so it's not a big concern for me to copy root-owned files between them. EDIT 2: If I only want to copy my configured /root environment into the freshly installed system, I can use tar: sudo tar cvzf - /root | ssh me@remote sudo tar xvzf - -C / But I do need rsync to update from time to time. Any easy way to make it happen? EDIT 3: To formally formulate the question: it all began with how to rsync files that belong to root between two systems as a normal unprivileged user, without specifying a password, under these conditions: the root account is locked on both systems, i.e. there are no root passwords, and the only way to become root is via sudo (recommended security practice, see http://help.ubuntu.com/community/RootSudo); I don't want a completely passwordless sudo, but I don't want to be typing passwords all the time either; the normal unprivileged user has entered their ssh passphrase into the ssh agent. Thanks
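
    One common pattern that fits these constraints (root locked, sudo only, key already in the agent) is to allow passwordless sudo for rsync alone on the remote side, then have rsync invoke it via --rsync-path. A sketch, using the question's "me" and "remote" placeholders:

        # On the remote machine, via visudo -- passwordless sudo for rsync ONLY,
        # not a blanket NOPASSWD rule:
        #   me ALL=(root) NOPASSWD: /usr/bin/rsync

        # From the local machine: run rsync as root locally (so it can read /root)
        # and as root remotely through the rule above. Preserving SSH_AUTH_SOCK lets
        # the root-run ssh reuse the agent-loaded key; on older sudo use plain -E.
        sudo --preserve-env=SSH_AUTH_SOCK \
            rsync -aAX --rsync-path="sudo rsync" /root/ me@remote:/root/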

    Read the article

  • Constructor and Destructor of a singleton object called twice

    - by Bikram990
    I'm facing a problem with a singleton object in C++. Here is the explanation. Problem info: I have 4 shared libraries (say libA.so, libB.so, libC.so, libD.so) and 2 executable binaries, each also using another shared library (say libE.so) which deals with files. The purpose of libE.so is to write data into a file; if the executable restarts or the size of the file exceeds a certain limit, the file is zipped and a new file is created with a timestamp in its name. It uses a singleton object and exports a handler class for getting and using the singleton. Compressing only happens in the two cases above. The user/loader executable can specify only the starting name of the file; no other control is provided by the handler class. libA.so, libB.so, libC.so and libD.so have almost the same behavior: they all have a class and declare an object of the handler, which gets the instance of the singleton in libE.so and uses it further. All these libraries are linked into the two executable binaries. If only one of the two executables runs then it's fine, but if both executables run one after the other then the file of the first started executable gets compressed. Debug info: The constructor and destructor of the singleton object are called twice (for each executable). The singleton object is a static object and is never deleted. The executable is not able to exit/return and gives: glibc detected * (exe1 or exe2): double free or corruption (!prev): some_addr * Running the binaries under valgrind shows that the above error is due to the destructor of the singleton object. Thanks
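
    When a supposedly process-wide singleton is constructed and destroyed twice, a frequent cause is that libE's code (including the static instance) was linked into more than one module, so each process ends up with two copies. A shell sketch for checking that; "MySingleton" stands in for the real class name from libE.so:

        # A symbol that shows up as *defined* (not just referenced) in more than
        # one of these modules means there are two copies of the singleton:
        for m in libA.so libB.so libC.so libD.so libE.so exe1 exe2; do
            echo "== $m =="
            nm -C -D --defined-only "$m" 2>/dev/null | grep -i 'MySingleton'
        done

        # Also confirm both executables really pull in the shared libE.so rather
        # than a statically linked copy of the same code:
        ldd exe1 | grep -i libE
        ldd exe2 | grep -i libE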

    Read the article

  • Corsair SSD appears completely blank and does not retain written data

    - by ebanders
    I have a 180GB Corsair SSD (model# CSSD-F180GB2-BRKT) as the primary drive in a Windows laptop. Recently the machine became unbootable after installing Windows updates. Windows installed updates before the machine shut down, and the next time the machine booted up it complained about not being able to find a bootable device. After fixmbr failed to make the machine bootable, I investigated a little within Knoppix. fdisk revealed an empty partition table. A scan by TestDisk came up empty. And finally 'head -c 1024 | hd' reveals all zeros. Creating a primary partition spanning the whole disk completes successfully, but after a reboot the disk appears empty again. dmesg reveals no read or write errors. smartctl indicates that the drive is healthy, although the SMART attribute values do not appear to be read properly: "Data Page | WARNING: PREVIOUS ATTRIBUTE HAS TWO" and "Threshold Page | INCONSISTENT IDENTITIES IN THE DATA" messages appear within the table of values. I don't have much experience with SSDs. Is this drive dead or something? Can anyone recommend any diagnostic tools that may be suited for diagnosing SSDs?
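
    A hedged diagnostic sketch from the Knoppix side: writes that succeed but vanish after a reboot often point at a controller that has dropped into a locked or read-only state, which the checks below try to surface. /dev/sdX is a placeholder for however the SSD actually appears (check lsblk or dmesg first).

        lsblk -o NAME,SIZE,MODEL                  # does the kernel see the full 180 GB?
        smartctl -a /dev/sdX                      # full SMART dump, including the error log
        hdparm -I /dev/sdX | grep -A8 Security    # is the drive security-locked or frozen?

        # Write one sector directly (no cache) and read it straight back; this is
        # destructive to sector 0, which the question says is already all zeros:
        dd if=/dev/urandom of=/dev/sdX bs=512 count=1 oflag=direct
        dd if=/dev/sdX bs=512 count=1 iflag=direct | hexdump -C | head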

    Read the article

  • DVD/CD burning .zip: is it more reliable, faster, longer lasting to burn a zip of files rather than the files as a folder?

    - by Rob
    Is it more reliable, faster, and longer lasting to burn to CD/DVD a zip (or a few large zips) of files rather than the files as a folder? I'm just thinking that thousands of small files might not be recorded as efficiently as one or a few large zips. Also, even after the burning program verifies the disc, I use Beyond Compare to compare the files with those on the disc. They always binary-compare as identical, but I hear the drive stuttering, presumably as the head is shifted only slightly each time to seek the next file, which leads me to think that it's best to make one or more zips and copy those locally to compare. Or is it that burning individual files to the disc makes them less readable, which causes the head to stutter? There aren't any problems, my disc burns are reliable; I'm just thinking more of efficiency and longevity. The discs burn and verify fast enough on my 18x DVD burner. I'm using ImgBurn mostly, and also used Nero in the past. I burn whole discs closed, finalised. I'm not sure which write mode, but I would think Disc At Once from a temporary cached image made by the burning program would be the most reliable.

    Read the article

  • How should an experienced Windows SysAdmin learn Linux? [closed]

    - by Systemspoet
    I have a new hire starting in a few weeks who is an experienced Windows SysAdmin. I think he's fairly senior on the Windows side, with a pretty deep AD understanding and experience with Exchange 2007, 2010, and Exchange migrations. He's done a little PowerShell, but I suspect more of the "run this command to do this" variety than the "write a script to do this" sort. However, we are a mixed shop and (he knows this) I expect him to become a reasonably competent Linux SysAdmin over time. I'm looking for good starting points to bring him along. I have over ten years of Linux/UNIX experience, so it all sort of seems intuitive to me, but I've been thinking about the toolkit you actually need to be productive in the Linux CLI world. Just to be able to use the machines at all, off the top of my head... vi. Basic CLI stuff -- move around, rename files, copy files, tar, gzip, changing passwords, finding relevant manpages, keeping track of where you are, finding things in your history, etc. More advanced things that I take for granted but are actually pretty hard -- doing things with 'find', extracting relevant text via 'awk' and/or 'cut', knowing when to use 'grep' and when to use 'grep -e' or 'egrep'. Distribution-specific stuff... compiling software, rpm, yum, apt-get, you name it. This all seems pretty basic to me, but when I think back to 1995 when I was first learning my way, some of those things took me years to master. So my question is -- where should I send him to pick up those skills? I'm not just thinking of classes, but also websites and books. What do you all suggest as a starting point for picking up Linux skills?
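
    A handful of concrete one-liners of the kind listed above can make a useful starter cheat sheet for someone crossing over (paths and patterns here are only examples):

        find /var/log -name '*.log' -mtime -1                 # log files changed in the last day
        find /etc -type f -exec grep -l 'example.com' {} +    # which config files mention a host

        ps aux | awk '$3 > 50 {print $2, $11}'                # PIDs and commands using >50% CPU
        cut -d: -f1,7 /etc/passwd                             # user names and their shells

        grep -E 'error|warn' /var/log/messages                # extended regex (same idea as egrep)
        history | grep ssh                                    # find things in your shell history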

    Read the article

  • Server redirect

    - by Tyy
    I have an ASP.NET app, XYZ, which requires SSL. I am supposed to work with an IIS instance that has only one Web Site - DefaultWebSite - containing multiple apps and virtual directories (3rd party). So my app is located at domain.com/XYZ. I have to meet these conditions: 1) requesting DefaultWebSite (domain.com) will run my app. It could redirect to ../XYZ, but I would rather not. If it has to be done this way, requesting DefaultWebSite or my app over either HTTP or HTTPS should always end up redirecting to https://domain.com/XYZ. 2) I can't touch any other apps or virtual directories, can't create an additional Web Site, can't set DefaultWebSite to require SSL, ... EDIT: 3) transmitted data (GET or POST) must be preserved. I tried to: set the Web Site root directory to my app, but this caused other apps to crash because of my Web.config (not sure why); set up an HTTP redirect on DefaultWebSite to https://domain.com/XYZ, which seems to work correctly but fails if the user requests my app directly (redirected to domain.com/XYZ/XYZ, or a redirect loop); set up a Default Document, but this seems to work only if it is located in the Web Site root directory. I know I could write a simple .aspx with Response.Redirect, but... is there any better solution? Am I missing something?

    Read the article

  • Hyper-V snapshots – unable to start VM

    - by ahmedz
    I restarted my host server after shutting down three guest VMs. After I restarted the machine I tried to start the VMs and got an error stating that the VM failed to start: SERVERNAME failed to start. Attachment 'avhd file path' is read only. Please provide read/write access to the attachment. Error: 'General access denied error' SERVERNAME failed to start. (virtual machine ID 17292200-wd22-dd22-d23-dddddd2222) The issue seems to be with disk space. The VHD file for this VM is 128 GB and there are two AVHD files of 58 and 75 GB, whereas the total disk space on this drive (E:) is 280 GB and the free space is only around 23 GB. I understand that the error is caused by the unavailability of the required disk space. Unfortunately, I cannot increase the disk space on this drive. However, I have another drive (D:) that has 400 GB of free space. I exported this VM to the D: drive and then tried to add the copied AVHD files, but it gives me a similar error. I am running Windows Server 2008 R2 Datacenter. Any help is appreciated.

    Read the article

  • Looking for a PowerShell script that can pull a file from a set of PCs and FTP it

    - by DangeRuss
    I'm looking to write a script (preferably PowerShell) that will essentially copy a file from a bunch of PCs and FTP it to a server. The structure of the environment is that we have a file on multiple PCs (around 50 or so) that needs to be placed on a server. Sometimes one of the PCs may be turned off, so the script would first need to ensure the PC is up and running (maybe a ping result), then it would need to go into a directory on that PC, pull a file off of it, rename the file, place it into a source directory, then remove the original. The naming convention doesn't matter, but a date/time stamp would be easiest. Ideally, it would be best to first move all the files to a source directory to save on FTP bandwidth, but since the files will all be named the same, they must be renamed during the move process. Move, not copy, because the directory needs to be empty so the file can be re-created the next day. Once moved to the source directory, all the files need to be FTP'd to a server for processing. After all of this, we need to know which PCs on the list did not respond so we can manually retrieve the file, so the script should output a file (txt is fine) that shows which PCs were offline. Everything is on one domain and the script will be run from a server with admin creds. Thank you!

    Read the article

  • SQL Server 2005 Disk Configuration: Single RAID 1+0 or multiple RAID 1+0s?

    - by mfredrickson
    Assuming that the workload for the SQL Server is just a normal OLTP database, and that there are a total of 20 disks available, which configuration would make more sense? A single RAID 1+0 containing all 20 disks. This physical volume would contain both the data files and the transaction log files, but two logical drives would be created from this RAID: one for the data files and one for the log files. Or... Two RAID 1+0s, each containing 10 disks. One physical volume would contain the data files, and the other would contain the log files. The reason for this question is a disagreement between me (SQL Developer) and a co-worker (DBA). For every configuration that I've done, or seen others do, the data files and transaction log files were separated at the physical level and were placed on separate RAIDs. However, my co-worker's argument is that by placing all the disks into a single RAID 1+0, any IO done by the server is potentially shared between all 20 disks, instead of just 10 disks in my suggested configuration. Conceptually, his argument makes sense to me. Also, I've found some information from Microsoft that seems to support his position. http://technet.microsoft.com/en-us/library/cc966414.aspx In the section titled "3. RAID10 Configuration", showing a configuration in which all 20 disks are allocated to a single RAID 1+0, it states: In this scenario, the I/O parallelism can be used to its fullest by all partitions. Therefore, distribution of I/O workload is among 20 physical spindles instead of four at the partition level. But... every other configuration I've seen suggests physically separating the data and log files onto separate RAIDs. Everything I've found here on Server Fault suggests the same. I understand that log files will be write-heavy, and that data files will be a combination of reads and writes, but does this require that the files be placed onto separate RAIDs instead of a single RAID?

    Read the article

  • How do I do a mail merge that includes images? (Maybe in Word 2007)

    - by Ian Ringrose
    I am trying to find out the practicalities of doing a mail merge when each “record” to be merged on includes some images. I need to print letters and envelopes. Both the letters and the envelopes have: fixed text, fixed images, text that comes from the mail merge record, and images that come from the mail merge record. I don’t know if all images will be the same size for every record, so a bit of simple “on the fly” automatic formatting may be needed. I need to be able to repeat a single item if I get a problem (e.g. when folding the letter). What problems am I likely to have? Is Word 2007 up to this sort of mail merging, or should I be looking at a report-writing tool? How do I restart a print run after a printer jam, etc.? What format should I store the “records” and their images in? E.g. can standard software cope with images that are stored in separate files named after the “CustomerId” that is in the “record”? (I can write custom software if needed, but would rather use standard “off-the-shelf” software for the printing; I am planning on custom software for the data creation, so I can output in whatever format is needed.)

    Read the article

  • Putting and configuring grub on an external drive

    - by HisDudeness
    I want to put a bunch of minor emergency operating systems (such as GParted Live, DSL, Puppy Linux and so on) on a partitioned USB pen drive, with a dedicated grub boot loader on it, which I want to start when I select the drive in the BIOS to boot. The problem is: when writing grub boot options I must tell it where the kernel and initial ram disk files are located, but the USB drive can get different device names depending on when I plugged it in and whether other external drives were mounted. So, how can I write appropriate options which automatically refer to the drive grub is installed on, without having to specify absolute paths which might change (I mean, like (hd1,msdos1) or /dev/sdb1)? And, while we are at it, can I have grub working on a device without an operating system on it to which it can refer? I mean, I want to run the command sudo grub-install /dev/sdb from the LMDE system I'm on right now, but I won't have LMDE on my pen drive. Is that a problem? And by installing grub on another device, will I keep the grub I have right now on my HDD?
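
    Assuming the LMDE host has GRUB 2, a sketch of the usual approach: install GRUB onto the pen drive's own MBR with its files on the pen drive, and have grub.cfg locate the drive by filesystem UUID so menu entries survive device renaming. Device names, mount points and UUIDs below are placeholders; grub-install against the wrong disk overwrites that disk's boot sector, and installing to /dev/sdb leaves the GRUB on the internal HDD untouched.

        sudo mount /dev/sdb1 /mnt/usb
        sudo grub-install --boot-directory=/mnt/usb/boot /dev/sdb

        # GRUB does not need an OS on the drive, only the boot directory above.
        # In /mnt/usb/boot/grub/grub.cfg, entries can find the drive by UUID
        # instead of (hd1,msdos1)-style names (get the UUID from: blkid /dev/sdb1):
        #
        #   menuentry "GParted Live" {
        #       search --no-floppy --fs-uuid --set=root 1234-ABCD
        #       linux  /live/vmlinuz boot=live
        #       initrd /live/initrd.img
        #   }
        #
        # The kernel/initrd paths depend on how each ISO's files are laid out on
        # the partition.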

    Read the article

  • Identifying Exchange 2010 regular process that is walking the mailbox database

    - by toongeneral
    I have an Exchange 2010 server running on a SAN-backed platform. The platform does block-level backups based on a snapshot/incremental basis, that only capture changed data. I was surprised to see a regular period of time where the data changes were happening at a high, sustained rate. Due to the way this system works, that can lead to 1.2TB of stored data per month. The regularity implied a scheduled task, but it is not a fixed interval. It is approximately every 26-32hrs. The disks were performing read operations of ~5MB/s and write operations of ~4.5MB/s, for a period of 3-4hrs. The total written data was ~55-60GB. Reading on TechNet, I am wondering if the following is causing this: http://blogs.technet.com/b/exchange/archive/2011/12/14/database-maintenance-in-exchange-2010.aspx#checksumming The somewhat restrictive thing is that the process only happens at most once every 24 hours. I was able to investigate while it was running, finding the following: the process is store.exe it is working on the mailbox database files while running, it is generating .log files (in the mailbox database folder) consistent with database changes the mailbox database is ~60GB in size, which fits with the total data changes on each iteration I have currently switched to a fixed maintenance window, as a test. It's not clear whether this is the cause, as the symptoms fit, but are not conclusive. Does anyone have any suggestions for additional troubleshooting?

    Read the article

  • Applications are being opened by IE instead of running normally

    - by Star
    I rewrote the question to add everything that I have tried so far. Many of my applications (not all) are being opened by Internet Explorer. For example, when I run Firefox.exe (from a shortcut) IE runs instead, with the following URL: http: // %22d/ Browser/firefox.exe%22 (I added spaces to prevent link creation). The shortcut target is: "D:\Browser\firefox.exe". When I attempted to open firefox.exe from its folder the results were the same. I attempted to open it via cmd, so I navigated with cmd to the Firefox path and then typed firefox.exe; the result was the same except that the URL was: http: // Firefox.exe/ When I just type firefox, the resulting URL is: http: // Firefox/ (is it some kind of parameter or something??). Trying the same with Chrome gave the same results as the previous tests. I tried creating a new user (administrator) but the problem is still there. I tried every registry key with exe in it (not sure if I tried them all): no change. I tried removing IE, but it came back by itself somehow; while IE was removed, Firefox and its fellow apps gave me the "Open with" window. I tried reinstalling the applications, but it's just no use. Timeline (as requested by @Daredev): I don't know when it happened, because the computer belongs to the company I work for and it was like that since I got it (the IT there gave up on the problem a long time ago!). Applications that were already installed are "firefox" and "XPS viewer". Applications that were working after the problem: everything except what uses browsing (MS help viewer, XPS viewer, firefox - even after I reinstalled it - opera, chrome). That is what I thought, but after installing Maxthon and Comodo Dragon this theory was blown away. System info: 1) Windows XP Professional Service Pack 3; 2) system fully patched: yes; 3) anti-virus up to date: yes; 4) same behavior when booting into safe mode: yes.

    Read the article

  • How to set up multiple users on a dev server with git and github

    - by Derek Organ
    I'm working on a LAMP application. We have 2 servers (Debian), Live and Dev. I constantly work on the dev main to add new features and fix bugs. When I'm happy that all works well, I scp the relevant code to the Live system. The database (MySQL) is local to each machine. This is a pretty basic setup really and I want to improve the workflow a bit. I use git and github for version control. Admittedly I've only really used one branch. There can be 3 different developers who work on the code at different times. We all use the same Linux username to connect to the dev server and edit the code directly when needed. I usually then commit and push the code to github at the end of the day. One thing to bear in mind is that it isn't easy to run this code on a local machine, as there are many Apache and subdomain configurations that wouldn't work locally, so it is important to work on the dev server, not locally. I need to create a new process because we now need a main trunk and a branch with a big code re-write. What is the best way to do this? Should I create different Unix logins for each developer and set up different working areas on the dev server for their changes? e.g. /var/www/mysite_derek /var/www/mysite_paul /var/www/mysite_mike My thinking is they can do a pull from the main branch, then create their own branch and merge it back in. I'm not sure how this will work, though, with git locally and with github. Will I need to create different github user accounts as well? I'd like to do this the 'right' way and future-proof it for having lots of potential developers, but I also don't want to overcomplicate it. A simple and elegant solution is preferred. Any recommendations or suggestions?
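
    One way to lay this out, sketched below with example names (derek, mysite): give each developer a Unix login and their own clone of the single GitHub repository under /var/www, do day-to-day work on feature branches, and merge back into the trunk. Each developer can push with their own GitHub account added as a collaborator on that one repo, so no extra repositories are needed.

        # One-off per developer (their SSH key must be registered with GitHub):
        sudo adduser derek
        sudo -u derek git clone git@github.com:example/mysite.git /var/www/mysite_derek

        # Day-to-day work happens on a branch in that clone:
        cd /var/www/mysite_derek
        git checkout -b rewrite-payments        # branch off the main trunk
        # ...edit and test against the dev Apache vhost pointed at this directory...
        git commit -am "Rework payment flow"
        git push -u origin rewrite-payments     # one GitHub repo, many branches

        # When the branch is ready, fold it back into the trunk:
        git checkout master
        git pull origin master
        git merge --no-ff rewrite-payments
        git push origin master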

    Read the article

  • Protocol (or service publish/discovery) to detect devices on a network

    - by Gobliins
    We connect some embedded devices to a network. What I am looking for now is a way to find the devices' IPs and identify them. We work with Windows PCs and I am about to write a C# tool that should do this. I thought about sending a UDP broadcast where the ack contains the device's IP, which would mean the device needs a daemon running and has to assign an IP itself. Or running a service (like a printer) on the device, and on the PC just looking up that service. I read about some things like APIPA, zeroconf, IPv4 link-local, Bonjour, DNS-SD and mDNS; they can automatically assign IPs and publish services in a network. My question is, can someone recommend what would be good for my task? - The protocol or service should be low on resource use (memory/CPU). - Are there standard protocols to use? - Is DNS a good idea, or would it be too resource-consuming just for finding a device's IP? - It should also work when no DHCP servers are around. Edit: To clarify a bit: the IP configuration is automatic. The problem to focus on is how to tell the PC which IP in the network (or on a direct connection, in which case there would only be one) belongs to the device (identity).
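
    DNS-SD over mDNS (Bonjour/Avahi) fits the listed constraints reasonably well: it is lightweight, standardised, and still works with link-local addressing when no DHCP server is present. A sketch using the Avahi command-line tools, where "_acme-device._tcp" and the instance name are made-up examples; the planned C# tool would do the same lookup through an mDNS/Bonjour library on Windows:

        # On each embedded device (assuming avahi-daemon is running), announce a
        # service carrying the device's identity in the name and TXT record:
        avahi-publish -s "sensor-0042" _acme-device._tcp 8080 "serial=0042" &

        # From any Linux machine on the same link: discover all such devices and
        # resolve their addresses, which answers "which IP belongs to which device":
        avahi-browse --resolve --terminate _acme-device._tcp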

    Read the article

  • Cloned CentOS 6.4 webserver for test purposes: virtual host, .htaccess, URL redirect issue

    - by Shogoot
    I see similar questions, but not my exact challenge. What I have done so far: I cloned a prod server over to VMware to use it as a test server for new functionality I'm going to write. I'm not a sysadmin by trade, but I'm new to this company and I have to do some things that are outside of my comfort zone (that's a good thing :) ). The prod server has 2 sites on it, s1.com and s2.com. In /html/s1/ and /html/s2/ there's an .htaccess file under each s*/, looking like this: RewriteEngine ON RewriteBase / RewriteCond %{QUERY_STRING} id=([0-9]+) RewriteRule ^.* %1.htm RewriteCond %{QUERY_STRING} page=modules/checkout RewriteRule ^.* order.php RewriteCond %{QUERY_STRING} page=pages/sidekart RewriteRule ^.* pages/sidekart.htm The issue is that s1 has a lot of pages that really belong under a third domain, s3; the rules in lines 4 and 5 redirect them to /html/s1/. An example of such a URL is: s3.com/?page=modules/product&id=521614 I'm trying to get those URLs (without modifying the URL) to redirect to s3's /html/s3/ server structure, which I set up by making a new virtualhost s3 in the test server's httpd.conf with test3.com as the servername, changing the other sites to tests1.com and tests2.com, adding a .htaccess to this s3 root directory as well, and making an html/s3/ directory structure which I populated with an index.html, etc. But when I take the same URL (s3.com/?page=modules/product&id=521614) and change it to tests3.com/?page=modules/product&id=521614, I get s1's index page showing up in my browser. I've poked around for about a day now and I can't figure out why this happens.
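
    One thing worth checking first: the vhost was created with test3.com as its ServerName, but the browser requests tests3.com; when no ServerName matches, Apache 2.2 serves the first defined vhost, which is exactly the observed fall-through to s1's index page. A hedged sketch of a matching test vhost, with names and paths assumed from the question's layout:

        # /etc/httpd/conf.d/tests3.conf  (CentOS layout; adjust paths to the real docroot)
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName tests3.com
            DocumentRoot /var/www/html/s3
            <Directory /var/www/html/s3>
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        # AllowOverride All lets the s3 .htaccess rules (written in the same style
        # as s1/s2) take effect. Then check and reload:
        #   apachectl configtest && service httpd restart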

    Read the article

  • Linux kernel buffer memory is zero

    - by user64772
    Hi all. There is one question that I can't find an answer to in Google. I have many Linux boxes, mostly with SLES or openSUSE, different versions and kernels. On some of them I face a problem with slow Oracle transactions. It happens from time to time, and when I log in to the box at such a time I see that Oracle is blocked in the kernel function sync_page # while :; do ps axo stat,pid,cmd,wchan | egrep '^D|^R'; echo --; sleep 5; done D 3483 hald-addon-storage: polling ide_do_drive_cmd Ds 4635 ora_dbw0_orcl sync_page Ds 4637 ora_lgwr_orcl sync_page Ds 4639 ora_ckpt_orcl sync_page D 11210 oracleorcl (LOCAL=NO) sync_page D 12457 [smtpd] sync_page R+ 12458 ps axo stat,pid,cmd,wchan - -- Ds 4635 ora_dbw0_orcl sync_page Ds 4637 ora_lgwr_orcl sync_page Ds 4639 ora_ckpt_orcl sync_page D 11210 oracleorcl (LOCAL=NO) sync_page R+ 12501 ps axo stat,pid,cmd,wchan - -- Ds 4635 ora_dbw0_orcl sync_page Ds 4637 ora_lgwr_orcl sync_page Ds 4639 ora_ckpt_orcl sync_page D 11210 oracleorcl (LOCAL=NO) sync_page R+ 12535 ps axo stat,pid,cmd,wchan - -- Ds 4635 ora_dbw0_orcl sync_page Ds 4637 ora_lgwr_orcl sync_page Ds 4639 ora_ckpt_orcl sync_page D 11210 oracleorcl (LOCAL=NO) sync_page R+ 12570 ps axo stat,pid,cmd,wchan - -- so I think the box has run out of memory for disk buffers, but memory is fine: total used free shared buffers cached Mem: 4149084 3994552 154532 0 0 2424328 -/+ buffers/cache: 1570224 2578860 Swap: 3148700 750696 2398004 I think this is the problem: the buffers are zero, so we must write directly to disk. But why are the buffers zero? I tried to Google it and found nothing. Can anyone help?
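
    On these kernels, "buffers" sitting near zero mostly just means I/O goes through the page cache ("cached" is about 2.4 GB above), so the zero itself is not necessarily the problem; what matters more is dirty-page/writeback pressure and disk latency while processes are stuck in sync_page. A sketch of what to capture during a stall (iostat needs the sysstat package):

        grep -E 'Dirty|Writeback' /proc/meminfo      # how much data is waiting to reach disk
        sysctl vm.dirty_ratio vm.dirty_background_ratio vm.swappiness

        # Per-device utilisation and await times: a disk pegged near 100 %util with
        # a high await would explain processes sleeping in sync_page.
        iostat -x 5 3

        # Swap activity during the stall (about 750 MB of swap is already in use above):
        vmstat 5 5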

    Read the article

  • Windows Server 2003 setup

    - by Barracksbuilder
    I work at a university maintaining the computer science department server. I am looking for a more economical way to streamline the setup of student accounts. CS students are granted a username and password, an IIS virtual directory, an FTP virtual directory, and a MySQL database. The server is running Windows Server 2003 R2 (possibly migrating to 2008 R2). The server is running a domain, though no students physically log a terminal into it (no computers are part of my domain). Creating the accounts is a manual process. I did write a PHP script to query the university's AD, copy the information and write it to my AD. I then have to create, basically, the user's home directory. I tried having AD do it, but since the user never physically logs in it never creates the directory. Permissions on this folder are set to User - full, Instructors (group) - full, Users (group) - read, IUSER - read. Inside the user's folder there is a "Private" folder with permissions User - full, Instructors (group) - full. The next step is IIS: I create a virtual directory in the default web site pointed at the user's home directory so they have a website. The same goes for an FTP virtual directory in the default FTP configuration to allow the users to upload files to their website. For MySQL I have to create a user and password, then create a MySQL schema (database) with full access for the user and full access for the instructors account so it can reach the student's database. All of this is done manually and takes me a week to do. The closest description is maybe a shared hosting environment. Is there a better way to do this? Scripting-wise, or a better structural setup?

    Read the article

  • Folder sharing: NTFS permissions with share permissions

    - by Muhammad Adly
    I have a problem on my domain. The history starts from when I had a server with Windows 2008 R2 installed, with the following roles on it (AD, DNS, DHCP, File). A month ago I decided to install a new 2008 R2 server to take over (AD, DNS, DHCP) and leave the file server role on the old one. I did exactly the following: 1) robocopy all my data to an external HDD; 2) install a new server with 2008 R2; 3) transfer all 5 roles to move the domain to the new server (MainDC); 4) issue: (NETLOGON, SYSVOL) were not transferred, but I decided to reinitialize them and now they are operating (MainDC); 5) re-create and re-configure new GPOs and link them to my OUs; 6) reinstall the old server's operating system with a fresh installation of Windows 2008 R2 (FileServer); 7) join my domain with my domain credentials. The issue: when I try to share a folder on \\fileserver, the permissions that I set in the sharing permissions are applied to the main shared folder and subfolders, but the security settings are not applied. I.e., say I'm sharing \\fileserver\MainFolder with a sharing permission allowing Authenticated Users to read, so everyone can read this main shared folder; if I set a security permission on \\fileserver\MainFolder\User1 so that User1 can Read/Write/Modify, User1 cannot perform those operations when accessing it over the network share. I tried a lot of steps from topics online: take ownership of the folder, remove inheritance from the parent folder, apply changes to child objects; I also tried to construct a new folder structure, but the issue is the same; I tried another host PC, and I get the same issue.

    Read the article

  • CentOS: AJAX/jQuery not working

    - by Australiya
    In a nutshell, I have an unmanaged VPS. At one time, it had Ubuntu 10.10 server on it, then I reinstalled it with CentOS 6 and updated it to CentOS 6.2. Now, the problem is, the AJAX/jQuery shoutbox has ceased working (I assume it uses one of the two to inject itself into a div and then refresh when new messages are posted, I'm not sure, I didn't write these), and the plug-board script now shows me a lime green blank page. No changes to the source codes have been made, and they are in the location they expect themselves to be. I have Apache2, MySQL 5, PHP 5, and I did install the php-xml libraries. What am I missing? It's gotta be server side because the scripts themselves are fine, if I move them to a different server they work just fine. I'm not getting any errors related to this in the error_log file. Thanks in advance! Edit: If you want, you can look at the plugboard at kazeshini.net/plugboard and there's an installation of the chatbox at silverlotus.kazeshini.net/yshout/example, I know nothing about scripts and debugging so better someone else looks at it than someone that doesn't know what they're looking for.

    Read the article
