Search Results

Search found 24207 results on 969 pages for 'anonymous users'.

  • Microsoft Word 2008 on the Mac sometimes "Disappears" documents, really.

    - by Ross Charette
    This happens in a computer lab environment and has happened at least 3 times. We are running Microsoft Office 2008 for Mac on Leopard, everything is updated. Our users' home directories are on a network drive, but the /Library/Cache folder is local. Typically a student will have a Word file that they have been working on; it was saved before they even logged onto the computer that day. They log on, open the document, click the save icon (not File > Save), sometimes even save multiple times, then close Word. The document is now gone. It's not hidden, and there are no autosaves or anything in the Cache folder. It is definitely not in the trash or trashes folder. Clicking it in 'recent documents' can't find it. Searching meticulously through every folder in their home drive turns up nothing. They look using Finder; I look ssh'd in as root into their home using ls -la. I look for similar files in case they renamed it by mistake. It's gone. Disappeared. Vaporized. It's happened to at least 3 different users in the past year. Much whining. Any idea?

  • Designing a persistent asynchronous TCP protocol

    - by dogglebones
    I have a collection of web sites that need to send time-sensitive messages to host machines all over my metro area, each on its own generally dynamic IP. Until now, I have been doing this the way of the script kiddie:
    - Each host machine runs an (s)FTP server or an HTTP(s) server, and correspondingly has a certain port opened up by its gateway.
    - Each host machine runs a program that watches a certain folder and automatically opens or prints or exec()s when a new file of a given extension shows up.
    - Dynamic IP addresses are accommodated using a dynamic DNS service.
    - Each web site does cURL or fsockopen or whatever and communicates directly with its recipient as needed.
    This approach has been surprisingly reliable; however, obvious issues have come up and the situation needs to be addressed. As stated, these messages are time-sensitive, and failures need to be detected within minutes of submission by end users. What I'm doing is building a messaging protocol. It will run on a machine and connection in my control. As far as the service is concerned, there is no distinction between web site and host machine -- there is only one device sending a message to another device. So that's where I'm at right now. I've got a skeleton server and a skeleton client. They can negotiate high-quality authentication and encryption. The (TCP) connection is persistent and asynchronous, and can handle delimited (i.e., read until \r\n or whatever) as well as length-prefixed (i.e., read exactly n bytes) messages. Unless somebody gives me a better idea, I think I'll handle messages as byte arrays. So I'm looking for suggestions on how to model the protocol itself -- at the application level. I'll mostly be transferring XML and DLM type files, as well as control messages for things like "handshake" and "is so-and-so online?" and so forth. Is there anything really stupid in my train of thought? Or anything I should read about before I get started? Stuff like that -- please and thanks.
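
    Below is a minimal sketch of one way to frame length-prefixed messages over such a persistent TCP connection, in Python; the 4-byte big-endian length header is an assumption, not part of the original design, and a control message like "is so-and-so online?" would just be a small XML or DLM payload passed to send_message().

        import socket
        import struct

        def send_message(sock: socket.socket, payload: bytes) -> None:
            # prefix each message with its length as a 4-byte big-endian unsigned int (assumption)
            sock.sendall(struct.pack(">I", len(payload)) + payload)

        def recv_exactly(sock: socket.socket, n: int) -> bytes:
            # read exactly n bytes, or fail loudly if the peer closed the connection mid-message
            buf = b""
            while len(buf) < n:
                chunk = sock.recv(n - len(buf))
                if not chunk:
                    raise ConnectionError("peer closed the connection")
                buf += chunk
            return buf

        def recv_message(sock: socket.socket) -> bytes:
            (length,) = struct.unpack(">I", recv_exactly(sock, 4))
            return recv_exactly(sock, length)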

  • Suggestions for transitioning to new GW/private network

    - by Quinten
    I am replacing a private T1 link with a new firewall device with an IPsec tunnel for a branch office. I am trying to figure out the right way to transition folks at the new site over to the new connection, so that they default to using the much faster tunnel.
    - Existing network: 192.168.254.0/24, gw 192.168.254.253 (Cisco router plugged in to the private T1)
    - Test network I have been using with the IPsec tunnel: 192.168.1.0/24, gw 192.168.1.1 (pfSense firewall plugged in to the public internet), also plugged in to the same switch as the old network
    There are probably ~20-30 network devices in the existing subnet, about 5 with static IPs. The remote endpoint is already the firewall--I can't set up redundant links to the existing subnet. In other words, as soon as I change the tunnel configuration to point to 192.168.254.0/24, all devices in the existing subnet will stop working because they point to the wrong gateway. I'd like some ability to do this slowly--such that I can move over a few clients and verify the stability of the new link before moving critical services or less tolerant users over. What's the right way to do this? Change the netmask on all of the devices to /16, and update the gateway to point to the new device? Could this cause any problems? Also, how should I handle DNS? The pfSense box is not aware of my Active Directory environment. But if I change DNS to use the local servers, it will result in a huge slowdown as DNS queries will still be routed over the private T1. I need some help coming up with a plan that's not too disruptive but will really let me thoroughly test the stability of the IPsec tunnel before I make the final switch. The AD version is 2008 R2, as are the servers. Workstations are mostly Windows XP SP3. I have not configured 192.168.1.0/24 as a site in AD Sites and Services.
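
    As a concrete illustration of the /16 idea on a single XP test workstation (the interface name and host address below are placeholders), the client keeps its old address but gets the wider mask and the new gateway, and can be flipped back just as easily:

        rem point one test client at the pfSense gateway while keeping its old address
        netsh interface ip set address "Local Area Connection" static 192.168.254.37 255.255.0.0 192.168.1.1 1

        rem revert to the old Cisco gateway if the tunnel misbehaves
        netsh interface ip set address "Local Area Connection" static 192.168.254.37 255.255.255.0 192.168.254.253 1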

  • Windows 7 Account Settings Vanished

    - by Roy Tang
    So I came home and stumbled upon a bit of a mystery. When I got home my brother was using my desktop PC running Windows 7 (I have accounts for my 2 brothers and my mom on the machine, but mine is the only admin account). After he finished his game he logged out and I logged in to my account, but found only strangeness. My Windows account seems to have been somewhat "reset", meaning:
    - my Quick Launch shortcuts were gone
    - my Dropbox account did not automatically log in
    - my Pidgin accounts were no longer there
    - I had to re-log in to Steam
    - iTunes could not launch (I had hooked up my iDevice before logging in)
    - the Documents/Pictures/Music shortcuts in the Start menu no longer work
    However, despite that:
    - my desktop wallpaper was still correct
    - my documents folder was still there in c:\Users\my account name\My Documents as expected
    - Google Chrome settings seem to have been retained
    - other accounts on the same machine seem to be fine
    I asked my brother if he had installed anything strange during the day; he only installed Yahoo Messenger. I last used the machine around 24 hours ago and it was fine then. I'm not sure what else has been affected. I'm inclined to just create a new admin user for myself, but I'd like to have some idea of what actually happened.

  • What are secure ways of sharing a server (ssh+LAMP) with friends?

    - by Bran the Blessed
    What is the best way to share a virtual server with friends? More precisely, I have the following assets:
    - A virtual private server (Debian Lenny) with root access for myself, running SSH, apache2 and mysql
    - Some unused disk space
    - Some friends in need of hosting
    The problem: I would now like to do the following:
    - Host one or several domains per friend
    - My friends should have full access to their domains, including running PHP scripts, for example
    - My friends should not be able to poke around in other directories
    - The security of my server should not be compromised by faulty PHP scripts
    To clarify: I do trust my friends in the sense that they are not trying to do something evil with their access; I just do not trust the programs they are going to run. So, what are your recommendations for establishing such a scenario?
    Partial solution: I already came up with the following plan:
    - Add chrooted SSH users for my friends
    - Add Apache vhosts per user (pointing the directories to subdirectories of the home directories, i.e. /home/alice/example.com, /home/bob/example.net, etc.)
    But how can I enforce a chroot-like environment for the scripts they are running within these vhosts? Any pointers would be appreciated.
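
    One hedged sketch of the per-user vhost idea, assuming mod_php on Apache 2.2: PHP's open_basedir can approximate the chroot-like restriction (the paths and ServerName are examples, and something like suPHP or mod_fcgid plus suEXEC would be needed if the scripts must also run as the owning user):

        <VirtualHost *:80>
            ServerName example.com
            DocumentRoot /home/alice/example.com
            <Directory /home/alice/example.com>
                AllowOverride None
                Options -Indexes +FollowSymLinks
                Order allow,deny
                Allow from all
            </Directory>
            # confine PHP file access to this user's tree plus a scratch tmp dir (assumption)
            php_admin_value open_basedir "/home/alice/example.com:/home/alice/tmp"
            php_admin_value upload_tmp_dir "/home/alice/tmp"
        </VirtualHost>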

  • EMERGENCY! Update Statement for critical mysql production database now running for 18 hours, need help.

    - by Tim
    We have a table with 500 million rows. Unfortunately, one of the columns was int(11), which is a signed int, and it was an incrementing value that just rolled over the 2.1 billion magic number. This immediately caused downtime for about 10,000 users. We discussed many solutions, and decided that we could just roll back this value safely, by, say, a billion. But we had to roll it back for every row. Here is what we did:
    update Table1 set MessageId = case when MessageId < 1073741824 then 0 else MessageId - 1073741824 end;
    I tested this on a table with 10 million rows and it took 11 minutes, so I assumed the larger table would take 550 minutes, or about 9 hours. This was going to be our biggest downtime in 3 years (we're a startup). It's now going on 18 hours. What should we do? Please don't say what we should have done; I think we should have updated a few million rows at a time. Is there a way we can see progress? Could MySQL have hung? Using MySQL 5.0.22. Thanks!
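
    For reference, a hedged sketch of the batched approach mentioned above, assuming the table has a primary key (called Id here, which is an assumption) that can be used to walk it in slices; each statement commits a manageable chunk and lets you watch progress between runs:

        -- run repeatedly, moving the id window forward 1,000,000 rows at a time (bounds are examples)
        UPDATE Table1
           SET MessageId = CASE WHEN MessageId < 1073741824 THEN 0 ELSE MessageId - 1073741824 END
         WHERE Id >= 0 AND Id < 1000000;

        -- rough progress check, assuming no legitimate MessageId values above 1073741824 remain
        SELECT COUNT(*) FROM Table1 WHERE MessageId >= 1073741824;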

  • apache front-end rewriting URL to different https ports?

    - by khedron
    Hi all,
    One of my users is having some trouble with forwarding to an internal web app from a public address. Everything worked fine for him when the situation was like this:
    - front page: http://www.myexample.com/
    - public ref to internal app: http://www.example.com/app-8903/app.html
    - secretly goes to: http://secret.example.com:8903/app-8903/app.html
    This is to say, my user is providing the very last URL, with the port information duplicated in the URL base, and they were using that to give a public face that hid both the port and the internal machine name. You could still read the port in the URL base if you looked, but the obvious reference and machine name were hidden. Doing it this way, he could have several different instances of the application running on secret.example.com with different ports, and on the front end it just looked like it was changing the URL directory/base. Now the user wants to do the same thing over https:, and the people helping him with apache config say it can't be done. Is that so? Without being there to tinker with the configuration myself, I'm not sure what his IT people have tried, but reading through the apache2 SSL FAQ and other docs, it seems like it should be possible to rewrite URLs to different ports and still use https:.
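
    For what it's worth, a hedged sketch of how that mapping is usually done over https with mod_proxy rather than a plain rewrite (hostnames and paths follow the example above; mod_ssl, mod_proxy/mod_proxy_http and a valid certificate are assumed to be in place):

        <VirtualHost *:443>
            ServerName www.example.com
            SSLEngine on
            SSLCertificateFile    /etc/apache2/ssl/www.example.com.crt
            SSLCertificateKeyFile /etc/apache2/ssl/www.example.com.key

            # only needed if the back end is itself reached over https (assumption)
            SSLProxyEngine on

            ProxyPass        /app-8903/ https://secret.example.com:8903/app-8903/
            ProxyPassReverse /app-8903/ https://secret.example.com:8903/app-8903/
        </VirtualHost>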

  • Providing access to a no-www website in an active directory environment

    - by oasisbob
    Our website is hosted externally, off our network. The canonical URL intentionally lacks www, and any requests containing www are 301-redirected to the canonical URL. So far, so good. The problem is providing access to the website from within our LAN. In theory, the answer is simple: add a host record in DNS pointing foobarco.org to the external webhost (e.g. foobarco.org -> 203.0.113.7). However, our Active Directory domain is the same as our public website (foobarco.org), and AD appears to periodically auto-create host (A) records in the domain root corresponding to our domain controllers. This causes obvious problems: users on the LAN attempting to access the website resolve the domain controllers instead. As a stop-gap measure we're overriding DNS using the hosts file on clients, but this is a quick hack that doesn't scale well. The hosts-file hack hasn't broken anything obvious, so I doubt that this behavior is essential to AD operations, but I haven't found a way to disable it. Is it possible to override this behavior?
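
    If it helps as a starting point: the domain-root A records are registered by the Netlogon service on each DC, and the documented DnsAvoidRegisterRecords setting can suppress specific record types. A hedged sketch, where the LdapIpAddress mnemonic is the one covering the domain-root A records; test on one DC first, since anything relying on resolving the bare domain name to a DC (DFS, some logon paths) may be affected:

        reg add "HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters" /v DnsAvoidRegisterRecords /t REG_MULTI_SZ /d LdapIpAddress /f
        net stop netlogon && net start netlogon
        rem then delete the existing domain-root A records for the DCs in the DNS console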

  • Is there a simple context-menu add-in that could make up for the Windows 7 status bar deficiency?

    - by DanO
    Edit: I initially asked about free disk space and selected item size. It has since been pointed out that the selected item size summary is still available natively in the details pane. I had read elsewhere (Wikipedia) that this was removed along with disk free space, which is not the case. Only free disk space has been completely removed; selection size is still available. Is there a context menu add-in out there that could show the free disk space of the relevant drive when you right-click? This would go a long way toward compensating for one of the only steps backward I've discovered in Windows 7 so far. I doubt anyone had created one specifically for this need before Windows 7, because this information was previously easily accessible in the status bar. I thought about creating one, but it has been a while since I have messed with the Shell API, and I know there are coders out there who could do it faster and better. If you've heard of one, or know of something else to make up for this Microsoft misstep, I'd appreciate hearing about it. If MS were listening to the community they would already have a powertoy or add-in of some kind to un-break this (they could release it unsupported, even), as there seem to be many power users that are extremely annoyed by this feature removal decision. If anyone has seen something, please post it here. As it has been only 4 days since the official Windows 7 release, I'll wait at least a week to choose an answer. Here's a picture of a prototype screenshot: SU question 19232 is related.

  • Jabber client for windows 7

    - by Anders
    I am looking for a Jabber client with some specific functions. I have spent 1½ days looking for one and it is getting tiresome. Clients that I have tried that have what I need, but that I am not interested in for one reason or another, are:
    - Pidgin: does not show complete messages in its popups.
    - Miranda IM: I have a constant disconnect issue that does not seem to be resolved in my case.
    What I need:
    - Popups: a popup that shows broadcasts to users, and a popup that shows when my username is typed in a conference chat. I need to be able to view the full message in the popup, with no theme configuration needed to enable this (or a working theme for it already). Preferable placement is the top right of the screen. It should be able to pop up when running full-screen applications, much like games.
    - Conferences: easy access to bookmarked conferences. I do not want to go through submenus to rejoin a disconnected or closed conference. If I close the conference window I want to stay connected to the conference until I exit the client. Tabbed interface.
    - Configuration: sober configuration; options are great but there is a limit, and the above needs to be available in the options in an understandable manner.
    What I wish for:
    - MSN: not needed! If it is available then it is a big plus.
    - Facebook: not needed! If it is available then it is a big plus.
    - Conferences/chats: not needed! Eye candy is always nice.

  • Apache Virtual Host Issue

    - by Nik
    I think I hate Apache now, but on with the issue. It might be a configuration error on my end or just my inability to see what's right in front of me, but I'm trying to configure a sub-domain in Apache and no matter what, it always redirects the sub-domain to the web root of the main domain. My configuration is posted below (and yes, the domain name information was purposefully modified):
    <VirtualHost *>
        DocumentRoot /var/www/root/
        ServerName example.com
        <Directory /var/www/root/>
            allow from all
            Options +Indexes
        </Directory>
    </VirtualHost>
    <Directory /usr/share/squirrelmail>
        Options Indexes FollowSymLinks
        <IfModule mod_php5.c>
            php_flag register_globals off
        </IfModule>
        <IfModule mod_dir.c>
            DirectoryIndex index.php
        </IfModule>
        # access to configtest is limited by default to prevent information leak
        <Files configtest.php>
            order deny,allow
            deny from all
            allow from 127.0.0.1
        </Files>
    </Directory>
    # users will prefer a simple URL like http://webmail.example.com
    <VirtualHost *>
        DocumentRoot /usr/share/squirrelmail/
        ServerName squirrelmail.example.com
    </VirtualHost>
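
    A hedged observation rather than a definitive fix: on Apache 2.2 and earlier, name-based virtual hosts are only consulted when a matching NameVirtualHost directive exists; without it, the first <VirtualHost *> block answers every request, which matches the symptom described. A minimal sketch of that change:

        # must appear once, before the <VirtualHost> blocks, using the same address/port form
        NameVirtualHost *

        <VirtualHost *>
            DocumentRoot /var/www/root/
            ServerName example.com
        </VirtualHost>

        <VirtualHost *>
            DocumentRoot /usr/share/squirrelmail/
            ServerName squirrelmail.example.com
        </VirtualHost>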

  • File upload folder permission fastCGI - How to make it writeable?

    - by user6595
    I am using CentOS 5.7 with cPanel WHM running fastcgi/suEXEC. I am trying to make a particular folder writable to allow a script to upload files, but seem to be having problems. The folder (and all folders below it) I want to be writable is /home/mydomain/public_html/uploads, and I want only scripts run by the user "songbanc" to be able to write to this directory. I have tried the following:
    chown -R songbanc /home/mydomain/public_html/uploads
    chmod -R 755 /home/mydomain/public_html/uploads
    But it still doesn't seem to work. The script will only upload files if I set the permissions manually via FTP client to 777. I assume I am misunderstanding how to set permissions for users with fastcgi, and hopefully someone can help me. Thanks in advance.
    EDIT: Running getfacl on one of the scripts or folders gives the following:
    # file: home/mydomain/public_html/ripples/1.jpg
    # owner: songbanc
    # group: songbanc
    So it appears that the owner is correct? I'm now totally confused!
    EDIT 2: The plot thickens... lsattr and chattr are returning "Inappropriate ioctl for device While reading flags on..."
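
    A hedged way to check whether the upload script really executes as "songbanc" (and not some other fastcgi/suEXEC user), assuming root shell access; if the first command fails, the problem is identity or ownership rather than the mode bits:

        # 1. try to write to the folder as the user the scripts are supposed to run as
        su -s /bin/bash songbanc -c 'touch /home/mydomain/public_html/uploads/write-test && echo OK'

        # 2. set group ownership too, and keep directories 755 / files 644 (assumes suEXEC runs as songbanc:songbanc)
        chown -R songbanc:songbanc /home/mydomain/public_html/uploads
        find /home/mydomain/public_html/uploads -type d -exec chmod 755 {} \;
        find /home/mydomain/public_html/uploads -type f -exec chmod 644 {} \;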

  • Folder Sharing NTFS permissions with Share Permission

    - by Muhammad Adly
    I have a problem on my domain. The history: I had a server with Windows 2008 R2 installed, with the following roles on it (AD, DNS, DHCP, File). A month ago I decided to install a new 2008 R2 server to take over (AD, DNS, DHCP) and leave the file server role on the old one. I did exactly the following:
    1) robocopy all my data to an external HDD
    2) install a new server with 2008 R2
    3) transfer all 5 FSMO roles to move the domain to the new server (MainDC)
    4) issue: (NETLOGON, SYSVOL) were not transferred, but I decided to reinitialize them and now they are operating (MainDC)
    5) re-create and re-configure new GPOs and link them to my OUs
    6) reinstall the old server's operating system with a fresh installation of Windows 2008 R2 (FileServer)
    7) join it to my domain with my domain credentials
    The issue: when I share a folder on \\fileserver, the permissions I set in the sharing permissions are applied to the main shared folder and its subfolders, but the security (NTFS) settings are not applied. For example, say I'm sharing \\fileserver\MainFolder with a share permission allowing Authenticated Users to read, so everyone can read this main shared folder. If I then set a security permission on \\fileserver\MainFolder\User1 so that User1 can Read/Write/Modify, User1 cannot do any of this when accessing it through the network share. I tried a lot of steps from topics online: take ownership of the folder, remove inheritance from the parent folder, apply changes to child objects; I also tried building a new folder structure and using another host PC, but I get the same issue.
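
    In case it is useful while testing, a hedged sketch of granting write access at both levels; effective access through a share is the more restrictive of the share permission and the NTFS permission, so a read-only share permission will always cap NTFS Modify. The local path, share name and domain/account names below are assumptions:

        rem share level: allow Authenticated Users to change files, not just read (creates the share)
        net share MainFolder=D:\Shares\MainFolder /grant:"Authenticated Users",CHANGE

        rem NTFS level: give User1 Modify on their own subfolder, inherited by files and subfolders
        icacls "D:\Shares\MainFolder\User1" /grant MYDOMAIN\User1:(OI)(CI)M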

  • How to store Movies on a separate volume from the iTunes media folder?

    - by Manca Weeks
    I have a rather enormous Music collection. The music itself is approaching the 1TB mark. I am storing that on an external drive already. My iTunes library files are in their default location (/Users/me/Music/iTunes). My iTunes media folder is on an external drive /Volumes/iTunes/iTunes Music This has been working as expected. Now I would like to store just the contents of the Movies folder in the iTunes media folder on a separate drive. Apparently, iTunes doesn't like aliases or symlinks. I saw somewhere that one could mount a volume in a different directory than the default /Volumes. I would like to permanently mount my new Movies volume in the directory /Volumes/iTunes/iTunes Music/Movies. I know there is a command to do this, but how does one configure Mac OS 10.6.4 to always automatically mount that volume in this directory? I hope someone can enlighten me... If I find a solution, I can finally import all my movies into iTunes and be able to search them and stuff - it would be a dream. Thanks, M
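
    If it helps, a hedged sketch of the /etc/fstab approach on 10.6 (the file is edited with vifs, the UUID below is a placeholder you would read from diskutil, and the mount point folder must already exist):

        # find the volume's UUID first:
        #   diskutil info /Volumes/Movies | grep "Volume UUID"
        # then add a line like this via: sudo vifs
        UUID=01234567-89AB-CDEF-0123-456789ABCDEF /Volumes/iTunes/iTunes\040Music/Movies hfs rw 1 2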

  • Windows 7 permissions and Samba domain

    - by Nimzo
    I'm the main desktop support in an office of mixed MacOS and WinXp machines, about 60. I'm new to Windows 7. Currently our XP users are admins on their own machines, and my boss is wanting us to get away from that now that we're going to Windows 7 (64bit). My boss is largely absent from my day-to-day work, so I'm here looking for help =) I have numerous unattended .cmd scripts that we run from a server share, unattended software installs. Some run at login, some have to be run manually. With the NetworkAdmin account logged on to the computer, I am able to run the .cmd files and install stuff just fine. With my test account logged on (Power User), I cannot run the .cmd file - I get an Access Denied. When I change my test account to an Admin on the machine, I still get access denied. However, the test account can simply double-click the .exe and install the software just fine, as admin. Power User can't install anything. How do I fix it so that any power user or admin on the machine can run anything as long as it's on our shared software drive?

  • Is Google tracking our web history even when we do not visit Google or its affiliate websites?

    - by Anoyon-12
    I have recently updated to Google Chrome 29.0.1547.76, and when I click the new tab button there is a new homepage now. My current settings forbid third-party cookies from being set, and I clear all of my browsing data every time I close the browser, or whenever I've been browsing too long. There is a help dialog that appears the first time you open the new tab page. What I did was clear all my browsing data (from the beginning of time) and then open the new tab again; the help dialog appeared. I did the same a third time and the same thing happened. So for the fourth time I cleared all my browsing data, clicked open a new tab, and then navigated to chrome://settings/cookies, and there were Google cookies already set. So does this mean Google is tracking our web history just for using Chrome? I know it's not illegal, because these cookies only appear when you click the new tab in Google Chrome 29.0.1547.76; maybe that was the reason Google redesigned the entire new tab page. From this, Google is forcing us to allow them to track us. I don't want to set another page as my new tab page, I just want the old one. Google has a long history of invading privacy without users' consent. There was that Safari incident; I am sure you people remember. So can anyone tell me about this issue? I may be wrong, so please explain.

  • SSH with public/private key to iMac fails.

    - by bennedich
    I'm trying to connect to my iMac (server) from my MacBook (client) on my LAN. Both have Mac OS X 10.6.4; the server is a new, clean install of the OS. When just activating Remote Login in System Preferences, everything works fine. But when setting up ssh to only work with a public/private key, I get the following error messages from the server log, depending on whether I use an RSA passphrase or not:
    - With passphrase (case 1): PAM: user account has expired for <myServerUserName> from 192.168.X.X via 192.168.X.Y
    - Without passphrase (case 2): Failed publickey for <myServerUserName> from 192.168.X.X port AAAAA ssh2
    This is my setup procedure:
    1. Create a private and public key on the client with the command ssh-keygen -t rsa. In case 1 I also set a passphrase.
    2. Move id_rsa.pub to the server path /Users/<myServerUserName>/.ssh/
    3. In this folder, execute cat id_rsa.pub > authorized_keys
    4. Making sure Remote Login isn't active, execute sudo /usr/sbin/sshd -d on the server.
    5. Back on the client, type ssh -v -v -v <myServerUserName>@192.168.X.Y and get prompted to accept the RSA key fingerprint. This is NOT the same fingerprint as the one from when I created the private/public key (should it be?). I accept.
    Depending on the case:
    - CASE 1: The client gets prompted for a password and the response is permission denied even though the correct password is given. Back on the server I can read the error message stated above for case 1: PAM: user account has expired...
    - CASE 2: The client gets the message Connection closed by 192.168.X.Y. Back on the server I can read the error message stated above for case 2: Failed publickey...
    What could possibly cause this?
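
    One hedged thing worth ruling out first: sshd silently ignores authorized_keys if the home directory, ~/.ssh or the file itself are group- or world-writable. A quick check/fix, run on the server as <myServerUserName>:

        chmod go-w ~                      # home directory must not be group/world writable
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys
        ls -le ~/.ssh                     # -e also shows any ACLs that might be interfering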

  • Apache form authentication issues

    - by rfcoder89
    I am trying to authenticate users through Apache using the form authentication method to restrict https requests to a certain folder. However, regardless of whether the correct login details are provided, it keeps reloading the same page, except the URL has the form values embedded in it, instead of redirecting to the appropriate page. I need to use the form authentication type instead of basic so I can write my own HTML for the user to log in. I am using Apache 2.4.9 and this is our current configuration.
    Apache config file:
    <Location C:/wamp/www/directory>
        SetHandler form-login-handler
        AuthFormLoginRequiredLocation https://localNetwork.com/username/TestBed/HTML/login.html
        AuthFormLoginSuccessLocation https://localNetwork.com/username/TestBed/HTML/test.html
        AuthFormProvider file
        AuthUserFile "C:/wamp/passwords"
        AuthType form
        AuthName realm
        Session On
        SessionCookieName session path=/
        SessionCryptoPassphrase secret
    </Location>
    And in the login HTML page I've added this for the user to log in:
    <form method="POST" action="/test.html">
        User: <input type="text" name="httpd_username" value="" />
        Pass: <input type="password" name="httpd_password" value="" />
        <input type="submit" name="login" value="Login" />
    </form>
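
    For comparison, a hedged sketch modelled on the standalone-login pattern from the mod_auth_form documentation; it assumes the protected content is served at the URL path /protected (Location takes a URL path, not a filesystem path like C:/wamp/www/directory), that the form posts to a dedicated /dologin handler rather than to test.html, and that mod_session, mod_session_cookie, mod_session_crypto and mod_request are loaded:

        <Location "/protected">
            AuthType form
            AuthName "realm"
            AuthFormProvider file
            AuthUserFile "C:/wamp/passwords"
            Require valid-user
            Session On
            SessionCookieName session path=/
            SessionCryptoPassphrase secret
            # send unauthenticated users to the login page
            ErrorDocument 401 /login.html
        </Location>

        <Location "/dologin">
            SetHandler form-login-handler
            AuthType form
            AuthName "realm"
            AuthFormProvider file
            AuthUserFile "C:/wamp/passwords"
            AuthFormLoginRequiredLocation "/login.html"
            AuthFormLoginSuccessLocation "/test.html"
            Session On
            SessionCookieName session path=/
            SessionCryptoPassphrase secret
        </Location>

    The form in login.html would then post to the handler, e.g. <form method="POST" action="/dologin">, keeping the httpd_username and httpd_password field names.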

  • How can I change the default program installation directory in Windows 7?

    - by Max
    Windows 7 is installed on my C drive, which is quite small. I am very tired of instructing new programs to put their files on my larger D drive during installation; I would like to change the default drive. This article says that you can use a registry hack, but I am giving Microsoft the benefit of the doubt and naively assuming that a configuration option exists somewhere. It's 2010... do I really have to hack my registry to make a simple tweak like this? Also, there's a ServerFault question that explains how to move the "Users" directory and create a symlink, which could also work. However, at the moment I have some apps in C:\Program Files, some apps in C:\Program Files (x86), and some apps in the corresponding folders on D:\, so it would be a hassle. Also, my small OS boot drive is a 10k RPM WD Raptor, and I feel like that probably gives a speed boost to apps installed on it that need to read & write to their directories a bunch. I wonder if it actually matters.
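
    For reference, the registry hack such articles describe usually boils down to two values, shown here as a hedged sketch; this is unsupported, does not move already-installed programs, and has been known to break updates of software that recorded the old path:

        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v ProgramFilesDir /t REG_SZ /d "D:\Program Files" /f
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion" /v "ProgramFilesDir (x86)" /t REG_SZ /d "D:\Program Files (x86)" /f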

  • Ubuntu server 10.04 disconnects after short periods of inactivity on my site

    - by Melot
    Hi! I'm new to Ubuntu (installed it for the first time just a couple of days ago on my server). I have Ubuntu Server 10.04 and am just using the terminal, no GUI like GNOME. So far it's working pretty great except for one big thing. Whenever I go to sleep and there's no activity on my server (it's not a big site, so active users drop to 0 during the night), the server kind of disconnects. The only thing that can bring the site back online is to restart the whole server. I've tried disabling power saving by using setterm, but that changes nothing. Even if I wake up the server by pressing a key, the site won't go back online! I've tried just restarting both Apache and MySQL (I'm using a LAMP server, btw), but not even that works. But as soon as I turn the power off and on at the server, everything works like normal for a couple of minutes of inactivity (~5-15 minutes I'd guess) and then it's down again unless someone logs in to the site and is active. I was previously using XAMPP on my laptop with Windows XP and that worked 24/7, so I don't think it's anything with my router or ISP. This is driving me crazy! My site is down all the time I'm in school, as I have no possibility to restart the server if it goes offline. Does anyone have a clue as to what could be wrong?
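
    A few hedged first checks that don't presume a cause (the interface name eth0 and the router address are placeholders); if a periodic keep-alive ping "fixes" it, the culprit is more likely the router or NIC idling than Apache or MySQL:

        # look for link-loss or power events around the time the site went down
        grep -iE 'eth0|link|power' /var/log/syslog | tail -n 50

        # check the NIC's link state and wake-on-LAN settings
        sudo ethtool eth0

        # crude keep-alive from /etc/crontab: ping the router every 5 minutes (assumes router is 192.168.1.1)
        */5 * * * * root ping -c 1 192.168.1.1 > /dev/null 2>&1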

  • Oracle: 1 Large Server vs. 2 Smaller Servers?

    - by nvahalik
    We are in the planning stages of setting up our production Oracle 10gR2 environment. Our budget gives us the ability to buy 2 processor licenses of Oracle DB Standard Edition. We have minimal experience with Oracle so I'll defer to anyone who has used it. We are trying to decide if we should set up a single dual quad-core box or 2 individual quad-core boxes in a RAC configuration. Our DB right now is about 60 GB, and at our peak, we'll have up to 150 concurrent users. Most of the big stuff is done via batch processing at night. My gut tells me that having 2 boxes in a RAC configuration can't be a bad thing because it provides a true hardware failover solution. DB stored in a shared LUN on a SAN via iSCSI. Plus if we ever need to add capacity, we already have boxes in place that can be upgraded with extra procs (I assume with zero downtime, since it's set up in a RAC config) if we add extra licenses, or RAM. Does RAC have any performance penalties? Will it add extra latency? Is there any true advantage for having dual processor boxes running these systems? If we build out the Oracle boxes with special hardware: hardware iSCSI cards, TOE NICs, will these boxes be solid? We are deploying on 64-bit Windows. So what would you do? One box or two?

  • How to get apache to look for files in different subfolders?

    - by prb
    I am definitely new to mod_rewrite. Note: here the URL is common, and all the folders and subfolders are on the same host. The URL a user uses to access their page is http://myurl.com/1234/filename.jpg. The name of the subfolder is an integer that is unique and generated dynamically by another application; the subfolder stores images specific to an individual user. The folder structure is as follows (main1 = document root, main2 is another folder within main1, i.e. within the document root):
    /main1/1234/filename.jpg
    /main1/5678/filename.jpg
    /main1/2345/filename.jpg
    /main1/1212/filename.jpg
    /main1/main2/2367/filename.jpg
    /main1/main2/8790/filename.jpg
    /main1/main2/9966/filename.jpg
    So, I want to write a rewrite rule so that if a user types http://myurl.com/1234/filename.jpg, the rule looks up where the file is and serves the request: for the request http://myurl.com/1234/filename.jpg the actual file is located at /main1/1234/filename.jpg, so it should be served from that folder; and if another user makes a request for http://myurl.com/9966/filename.jpg, it should be served from /main1/main2/9966/filename.jpg. Please let me know if the question is still not clear. This is what I have done so far, and it does not work at all:
    RewriteCond {DOCUMENT_ROOT}/%{REQUEST_FILENAME} -f
    RewriteRule ^(.*)$ {DOCUMENT_ROOT}/$1 [L]
    RewriteCond {DOCUMENT_ROOT}/main2/%{REQUEST_FILENAME} -f
    RewriteRule ^(.*)$ {DOCUMENT_ROOT}/main2/$1 [L]
    Any help is really appreciated.
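
    For comparison, a hedged sketch of how this fallback is usually written in the vhost config, assuming main1 really is the DocumentRoot (so requests like /1234/filename.jpg already resolve there without any rule) and noting that %{REQUEST_FILENAME} is already an absolute path, which is why prefixing it with the document root breaks the conditions above:

        RewriteEngine On

        # if the requested file does not exist directly under the document root...
        RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
        # ...but does exist under the main2 subfolder, serve it from there
        RewriteCond %{DOCUMENT_ROOT}/main2%{REQUEST_URI} -f
        RewriteRule ^ /main2%{REQUEST_URI} [L]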

  • What method of MySQL mirroring should I use for this?

    - by user45745
    I'm running a web application hosting service (basically hosting forums for free), and I have two remote servers at my disposal. The code for the application is stored on both servers and isn't a problem, but I'm wondering how to deal with the databases. When someone goes to a site *.example-host.com, they are sent to one of the two servers, and both must be capable of loading the forums from a database. The database must also have write access, for when new members register or post topics etc. The main requirement is speed, but uptime is also important (if a server goes out, the site should still work). I have a few options, but I'm inexperienced and not sure which to go with:
    1) [PHP] Split the forum records 50:50 between the two servers. If a server does not have the record for a requested forum, it can request it from the other by remote MySQL and load it. This idea sounded okay, until I realised that 50% of the time users would be waiting significantly longer for pages to load. I also realised that if one of the servers went down, half the forums would be inaccessible and registrations would have to be disabled.
    2) [MySQL] Dual-master replication. This would attempt to mirror the two databases and sounds perfect, but I've heard that it can be very problematic. I don't know how fast this is.
    3) [MySQL] Use standard replication, distribute read-only queries across both nodes and send read/write queries to the master. This sounds like a good option, but again, I'm not sure on speed. I also don't know what would happen if the master server went down.
    If you have any other suggestions, please post them :)
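
    For option 2, the dual-master setup usually comes down to a handful of my.cnf settings plus disjoint auto-increment ranges so the two masters never hand out the same ID; a hedged sketch (the values are examples, and each server still needs a replication user and CHANGE MASTER TO pointing at the other):

        # server A
        [mysqld]
        server-id                = 1
        log-bin                  = mysql-bin
        auto_increment_increment = 2
        auto_increment_offset    = 1

        # server B
        [mysqld]
        server-id                = 2
        log-bin                  = mysql-bin
        auto_increment_increment = 2
        auto_increment_offset    = 2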

  • Exchange 2003 inbound routing issue

    - by user565712
    Just recently we started experiencing inbound routing issues. Email addressed to [email protected] is intermittently translated to [email protected]. This is happening for several users and, as stated, is intermittent. I don't know where to start looking for the solution. Is this an Exchange issue? A DNS issue? We have a single Exchange server inside our network with an FQDN of server.domain.local and a single SMTP virtual server. The Advanced properties of the Delivery tab of the virtual server have an empty Masquerade Domain textbox, and the value for the FQDN textbox is set to the domain itself, domain.com. The DNS record for domain.com is a CNAME entry referencing www.domain.com. Is this somehow related to the problem? I checked the headers of the inbound messages that generated NDRs as a result of being sent to [email protected], and nowhere in the header is www.domain.com mentioned. To make my life even more difficult, we use Postini as a third-party SPAM filtering service. Our MX records point to the Postini servers and Postini delivers the messages to our server. Perhaps it is Postini that is mucking things up? sigh I'm having trouble with this one, and the intermittent aspect is making it that much more difficult for me. Any ideas?

  • How do I remove a printer connection without the user's intervention?

    - by 1.618
    Here's the situation: We're replacing 11 printers with newer models, and we'll be installing them on our print server and sharing them out. The plan is to share the new printers under different names than the ones they're replacing, and un-share the old ones. So I need to come up with a way to remove the client connections to the old printers automatically. Clients are mostly Windows 7 with a few XP. My first idea was to call prnmngr.vbs from the login script to remove each old printer explicitly by name. The problem is that some users don't log out when they're done for the day, so I can't count on their login script running before they next need to print. I could remotely run prnmngr.vbs using SCCM, but if it's not 'impersonating' the user, I don't think it will remove their printers. Any ideas? Could I look up how to access WMI using C# code and write a "trojan" to remove specific printers without requiring the user to do anything? (I'm only half joking). I'm open to any suggestion! Thanks!
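
    For what it's worth, a hedged sketch of the login-script approach described above (the server and queue names are placeholders; on Windows 7 prnmngr.vbs lives under Printing_Admin_Scripts, on XP under system32):

        rem Windows 7
        cscript //nologo %windir%\System32\Printing_Admin_Scripts\en-US\prnmngr.vbs -d -p "\\printserver\OldPrinter1"

        rem works on both XP and 7, quietly and without prompting the user
        rundll32 printui.dll,PrintUIEntry /dn /n "\\printserver\OldPrinter1" /q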
