Search Results

Search found 12089 results on 484 pages for 'rule of three'.


  • Installing WindowsAuthentication breaks authentication / web.config?

    - by Ian Quigley
    I have a clean Windows 2008 R2 box (on a VM) and have installed IIS 7.5 with default options. I then copied a website to it (from Windows 7, IIS 7) and after a little tweaking the website is working fine. The website is currently using and working with Anonymous Authentication. I have gone back to Windows Components/Server Manager, Roles - Security, and ticked and installed Windows Authentication. When I check my server in IIS (top level, above sites) - Authentication, I see Anonymous Authentication (enabled), ASP.NET Impersonation (disabled), Forms Authentication (disabled), Windows Authentication (enabled). When I check my default website - Authentication, I see the same, but with "Retrieving status" and an error dialog saying: There was an error while performing this operation. Details: Filename: c:\inetpub\wwwroot\screwturnwiki\web.config Line number: 96 Error: This configuration section cannot be used at this path. This happens when the section is locked at a parent level. Locking is either by default (overrideModeDefault="Deny"), or set explicitly by a location tag with overrideMode="Deny" or the legacy allowOverride="False". I have tried hand-editing the web.config with no success. Un-installing Windows Authentication happily returns my site to working with Anonymous Authentication, and allows me to enable/disable these three options. FYI, I am using ScrewTurnWiki with the Active Directory plug-in.
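
    If the lock is at the server level, one hedged possibility is unlocking the authentication sections with appcmd so the site's web.config is allowed to set them (a sketch, assuming default install paths; run from an elevated prompt):

        %windir%\system32\inetsrv\appcmd.exe unlock config -section:system.webServer/security/authentication/windowsAuthentication
        %windir%\system32\inetsrv\appcmd.exe unlock config -section:system.webServer/security/authentication/anonymousAuthentication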

    Read the article

  • Libraries merged folder views

    - by Stigma
    So I pretty much love the Windows 7 Libraries feature, and saw one use for it that I thought would be perfect, but I can't seem to manage it: basically, a merged view of different folder structures. Suppose I make a new generic library and add three locations to it: C:\Test\, D:\Test\ and D:\temp\Test\. Now, these may look somewhat okay as long as there are no duplicates in these folders. (It wants to group them based on the included directory, which one can work around with a trick found on Google - I don't have the precise trick on hand, I'm afraid.) But when you get collisions and, say, two of those directories have a Sub directory in them, things become unusable (assuming the Arrange by: Folder view). You'll have multiple folders listed named Sub, which is pretty useless when looking for data. I want folders to get 'merged', which ought to be possible somehow, since it can create these merged views based on artist, album etc. in other views. So all subdirectories that appear twice (recursively checking for doubles inside those, etc.) ought to be merged as far as the view is concerned. If files collide, I don't really care what happens - hide one, show both, filter out duplicates, whatever. (Although an option would be nice...) Anyhow, does anyone know how to get such a 'merged folder structure' functionality for Libraries? It would be really useful for me.

    Read the article

  • Iframe pages on Facebook do not show in Internet Explorer 9 - Windows 7 64-bit

    - by Morten
    Have this very irritating problem with Internet Explorer 9 and Facebook. If I go to Facebook and view a page with iframes (like IFBML pages), it will not show up in Internet Explorer 9. It shows up in Firefox 4 and Chrome 10, but not in Internet Explorer 9. I run Windows 7 64-bit SP1 (Danish). The strange thing is that I own three different PCs, they all run Windows 7 64-bit SP1, and all of them have this issue. Can't figure out what causes it. I have tried the following: uninstalled AVG antivirus and installed Microsoft's antivirus - no change; updated Windows with SP1 - no change; updated from Internet Explorer 9 beta to Internet Explorer 9 final edition - no change; emptied cache and temp files in Internet Explorer 9 - no change; made www.facebook.com a trusted site in Internet Explorer 9 - no change; and a lot of other things I can't remember, I guess... but nothing seems to work. As I'm using quite a lot of my working time developing Facebook fan pages, it is frustrating not to be able to test them in Internet Explorer 9. BTW - it is Internet Explorer 9 32-bit, not 64-bit. Any clues?

    Read the article

  • Exchange 2010 DAG + VMWare HA = no support?

    - by Dan
    We currently have an Exchange 2003 clustered environment (two-machine cluster) that we're looking to upgrade to 2010. We recently purchased a VMware virtualization environment (three Dell R710s with an EMC NS-120 serving up NFS datastores - iSCSI is available) that we wish to use for this new environment. I'm seeing that Microsoft does not support Exchange 2010 DAGs with a virtualization high-availability solution (see links below). I would like to utilize the DAG to ensure the data stays available if one host goes down, and HA to ensure that if the physical host goes down, the VM will come back up on the other available host. Does anybody know why MS does not support this? VMware HA will only restart the VM if it is hung/down - I don't see any difference between this and restarting the physical box if someone pulled the power... Will we only run into issues with support if it has something to do with HA/DAG failover, or will they see we have HA and tell us to put it on a physical box even if it has nothing to do with HA? If we disable HA for these VMs, will that satisfy them on a support case? Has anybody set up an Exchange 2010 DAG on VMware with HA enabled? Will they have any issues with using an NFS datastore? We have much greater flexibility on the EMC with NFS vs. iSCSI, so I would prefer to continue utilizing that. Thanks for any input! http://www.vmwareinfo.com/2010/01/verifying-microsoft-exchange-2010.html (take a look at the second image under "Not Supported") http://technet.microsoft.com/en-us/library/aa996719.aspx ("Microsoft doesn't support combining Exchange high availability solutions (database availability groups (DAGs)) with hypervisor-based clustering, high availability, or migration solutions. DAGs are supported in hardware virtualization environments provided that the virtualization environment doesn't employ clustered root servers.")

    Read the article

  • Vagrant doesn't detect chef-solo unless re-installed

    - by nightowl
    I am using Vagrant to test my Chef recipes in Amazon AWS, and I am encountering an irritating issue: I initially assumed that Vagrant would install Chef itself (as it does when using VirtualBox as the provider), but it seems this needs to be done via the cloud-init script. However, even after I successfully installed the chef gem via cloud-init, I was still getting the following error: The chef binary (either chef-solo or chef-client) was not found. A quick Google search for this error suggested three probable causes: (1) Chef had failed to install; (2) it had installed, but the directory was not in the $PATH environment variable; (3) it had installed and was in the $PATH, but with incorrect permissions. I logged in and double-checked: chef-solo and chef-client were installed, the $PATH for the user, sudo and root all included /usr/local/bin, and permissions were all fine. I managed to solve the problem by uninstalling and reinstalling the gem using sudo gem install chef. I don't understand why this resolves the issue, and it is a bit of a problem if I have to ssh into a test box and manually install the gem every time. Does anyone have any suggestions why this might be happening?
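
    For reference, a minimal cloud-init user-data sketch that installs the gem as root, mirroring the manual `sudo gem install chef` that fixed the box (this assumes a system Ruby with rubygems already present; it is a guess at reproducing the fix, not a confirmed cause):

        #!/bin/sh
        # run by cloud-init as root, so the gem binaries land where
        # root's PATH (and vagrant's sudo'd commands) will find them
        gem install chef --no-ri --no-rdoc
        which chef-solo chef-client   # sanity check, visible in the cloud-init log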

    Read the article

  • Virtual machines with failover setup

    - by kimmmo
    We have three servers, and our plan is to run a number of virtual machines on them in such a manner that if one of the nodes blows up, we can either quickly or seamlessly get a spare running on another node. In addition to the normal networking, they're interconnected via dual 10 Gbit NICs, so networked RAID/mirroring shouldn't be a problem. The guest VMs are mostly going to be running text-mode Linux, but of course it wouldn't hurt to be able to spin up a non-mission-critical Windows guest for running Visual Studio or checking IE compatibility of a web app. We've spent some time trying to get a magical cloud setup running using StackOps and Crowbar, but it started to look like they offered way too much and were too complicated for our needs. The next candidate, I think, is Ubuntu 11.04 server + KVM + Ganeti + DRBD, unless you can come up with a suggestion for a better solution that we have missed. Requirements: installation should be simple, or at least understandable without being on the dev team; a browser interface for creating and managing VMs is a nice bonus; a single node's hardware failure should cause minimal downtime for VMs that were running on that node; adding more nodes should be possible without shutting down the VMs.
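
    For a sense of what the Ganeti + DRBD route looks like once a cluster is initialized, a rough sketch (the command names are real Ganeti commands, but the node/instance names, OS template and sizes here are made up):

        # create a KVM instance whose disk is DRBD-mirrored across two nodes
        gnt-instance add -t drbd -o debootstrap+default -s 20G -B memory=2G \
          -n node1.example.com:node2.example.com vm1.example.com
        # if node1 dies, bring the instance up on its secondary node
        gnt-instance failover --ignore-consistency vm1.example.com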

    Read the article

  • Dell OpenManage On Ubuntu Server 12.04 Cannot Log In

    - by Austin
    I have a Dell PowerEdge 2950 with 2 x 130 GB and 2 x 2 TB drives. I need to set them up in RAID 1 arrays so that the 130 GB drives are mirrored and host the OS, while the 2 TB drives are mirrored and are the content drives. So I go from four disks down to two volumes: one 130 GB and one 2 TB. I can do that in the BIOS RAID utility, no problem. But I need to be able to manage the RAID arrays and expand them WITHOUT shutting down the server. Now, to my understanding, OpenManage will allow me to do that AND it runs on Ubuntu. So I went and set it up and tried to log into the web interface, but it will not let me log in. I have followed Dell's guide to set up OpenManage, and even added the usernames to the files and permissions and such; however, I cannot get it to let me log in or anything. I have reinstalled OpenManage several times, and even reinstalled the OS three times, and nothing works. Google does not help either. It simply says login failed after hitting submit. Please help.
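
    For what it's worth, a sketch of the usual OMSA-on-Ubuntu setup via Dell's community repository (the repository URL and package names here are my assumptions from memory; verify against Dell's current documentation):

        echo 'deb http://linux.dell.com/repo/community/deb/latest /' | \
          sudo tee /etc/apt/sources.list.d/linux.dell.com.sources.list
        sudo apt-get update && sudo apt-get install srvadmin-all
        sudo /opt/dell/srvadmin/sbin/srvadmin-services.sh restart
        # then browse to https://<server>:1311 and log in as a local admin user

    If login still fails, some reports point at /opt/dell/srvadmin/etc/omarolemap needing a line granting your user the Administrator role - again an assumption to verify, not a confirmed fix.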

    Read the article

  • Convert TOD (.mpeg-2) to AVI, MPG or WMV for Movie Maker

    - by yearofhao
    Need to convert .tod (MPEG-2) files downloaded from a JVC Everio to a PC into AVI, MPG or WMV - including uncompressed/raw AVI? Have a JVC Everio camcorder? Then you may encounter problems when saving the .tod files to your computer: Windows Movie Maker says it can't recognize them, so it can't edit them into videos. You can play them with a media player, but the problem is how to edit them. The bundled Power Cinema software can be annoying, since you can only edit while the camera is plugged into the PC - Power Cinema can't seem to edit from the saved clips alone. So, how do you save the files to the PC so that you can edit them without the camera, using Windows Movie Maker? The JVC Everio TOD to AVI/MPG/WMV converter costs little but helps you convert TOD files to AVI, MPG, WMV, YouTube FLV, MP4, DV, QuickTime MOV or other common video formats quickly, while keeping the original HD quality. High-definition TOD recordings from JVC camcorders play back fluidly, convert smoothly and can be edited professionally with the iOrgsoft TOD file converter. The iOrgsoft TOD to AVI/MPG/WMV converter has mostly been used by Windows 7 and Vista users; after the .tod (MPEG-2) files are copied from the JVC Everio to the PC, it's best to convert TOD to AVI (XviD/DivX, uncompressed or raw AVI), MPG or WMV, which are the three formats Windows Movie Maker imports best. The converter is a competent video-editing program that lets you clip/cut TOD video, crop the video before encoding, and helps transfer video to devices like the iPhone, iPod, or an HDTV connected to an Apple TV.

    Read the article

  • My X server doesn't load a module called "glx", but my video drivers seem to be installed

    - by rumtscho
    I just got a new, very wide monitor (2560x1440) and there is no sense in maximizing my applications. So I installed CompizConfig Settings Manager and enabled Grid. Nothing happened; the shortcuts don't move application windows. Went to System - Preferences - Appearance, the Visual Effects tab: it was at "disabled". When I try to set it to "normal" or "extra", a message box appears telling me that it's searching for video drivers, then disappears, and I get an error message "Desktop effects could not be enabled". I opened Xorg.0.log and found these errors:
      (EE) Failed to load /usr/lib/xorg/modules/extensions//libglx.so
      (II) UnloadModule: "glx"
      (EE) Failed to load module "glx" (loader failed, 7)
      (II) LoadModule: "extmod"
    Going to System - Administration - Hardware Drivers, it said that there are no available and/or installed hardware drivers. But apt-get said that it cannot install nvidia-glx-185, as it is already installed. Googling my error message suggested that I install and run something called EnvyNG. This let me install the NVIDIA drivers again, and now I can see in the Hardware Drivers window that they are installed and active. But the error message in Xorg.0.log remains, and I still cannot enable the Compiz effects or use Grid. Now, I don't have enough Linux experience to tell whether this is a single cause-effect chain of problems or three independent ones, but I'd appreciate help with any of them. I am running Ubuntu 9.10; the video card is a GeForce 7600GS.
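
    One hedged thing to check (an assumption, not a confirmed fix): Ubuntu's nvidia-glx packages replace X.org's libglx.so via dpkg diversions, and a stale diversion left behind by a tool like EnvyNG can leave X trying to load the wrong libglx. A sketch:

        # reinstall the driver package so its libglx diversion is re-applied
        sudo apt-get install --reinstall nvidia-glx-185
        # list any glx-related diversions to spot a stale one
        dpkg-divert --list | grep -i glx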

    Read the article

  • Microsoft Application Request Routing with Windows Authentication

    - by theplatz
    I'm running into a problem trying to get Windows Authentication working in an environment that uses Microsoft Application Request Routing, and was hoping someone might be able to help. The problem is that only some requests are authenticated, while others fail with 401 errors. I have followed the "Special Case of Running IIS 7.0 in a Web Farm" instructions found at http://blogs.msdn.com/b/webtopics/archive/2009/01/19/service-principal-name-spn-checklist-for-kerberos-authentication-with-iis-7-0.aspx to no avail. My current server setup looks like the following:
    ARR: two servers set up with IIS shared configuration, using IIS 7.5 on Windows 2008 R2; anonymous authentication turned on for the Default Web Site.
    Web farm: two servers running IIS 7.5 on Windows 2008 R2; three web sites set up using port binding to differentiate between virtual hosts (ports 8000, 8001 and 8002); the application pools for Windows Authentication all use a common domain account; SPNs added to the domain account for http/<virtualhost-name>:<port-number> and http/<virtualhost-name>.<fully-qualified-domain>:<port-number>.
    The IIS logs show the following when authentication is working/failing. If I understand correctly, all requests should show DOMAIN\User_Name:
      2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/stylesheets/techweb.landing.css - 8002 DOMAIN\User_Name ARR-HOST-1-IP-ADDRESS 200 0 0 62
      2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-right.gif - 8002 - ARR-HOST-1-IP-ADDRESS 401 2 5 0
      2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-left.gif - 8002 DOMAIN\User_Name ARR-HOST-IP-ADDRESS 200 0 0 31
      2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 2 5 0
      2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 2148074248 0
      2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/application-icon.png - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 2148074248 0
      2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/user-background-right.gif - 8002 - ARR-HOST-1-IP-ADDRESS 401 1 3221225581 15
      2012-11-19 15:03:17 CLUSTER-IP-ADDRESS GET /home/images/building.gif - 8002 DOMAIN\User_Name ARR-HOST-2-IP-ADDRESS 200 0 0 218
    Does anyone know what might cause this problem and how I can resolve it?
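
    One hedged thing to check for Kerberos behind a load balancer: when application pools share a custom domain account, IIS may need to use that account (rather than the machine account) to decrypt tickets. A sketch, where the site, account and SPN names are placeholders standing in for the ones above:

        REM register the SPN against the shared app-pool account (done once per host/port)
        setspn -S http/virtualhost-name:8000 DOMAIN\AppPoolAccount
        REM tell IIS to use the app-pool identity for Kerberos on each site
        %windir%\system32\inetsrv\appcmd.exe set config "SiteName" ^
          -section:system.webServer/security/authentication/windowsAuthentication ^
          /useAppPoolCredentials:"True" /commit:apphost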

    Read the article

  • Vista laptop 1 connects to network but Vista laptop 2 doesn't

    - by Jaips
    (Sorry if this is a duplicate; I did a hunt but couldn't find a matching question.) We recently got a new router for our network (Thomson TG585 v8). It was already pre-configured by the ISP and was easy to set up. Both Vista laptops in this home network connected without trouble, as did an iPod touch, all via wifi. In the last two days, one of the laptops has been unable to connect cleanly to the network (it connects as "unidentified network"). The other laptop and the iPod have not had any issues. I can't think of anything that has changed to make this happen now. I have rebooted the laptop and the router, updated the laptop's wireless driver, checked that the laptop is set to get an IP automatically, and logged into the router, which correctly identifies all three wireless devices. The problem laptop connects via LAN without issue. Some things that may or may not matter: the problem laptop also sometimes uses a 3G dongle; both laptops use Windows Live Mesh to sync folders; the laptop is usually set to go to sleep. NB: this seems to be an intermittent issue - an hour after I noticed it, it seemed to have fixed itself. I would love ideas, though, to solve this problem for good.

    Read the article

  • Excel 2007 pivot table does not aggregate properly

    - by Patrick
    I am using an Excel pivot table to summarize some data and just found a problem. The problem concerns how aggregate values are calculated. Say I have a table of data with three columns: Name, Date, Value. I create a pivot table with Name and then Date as Row Labels, and Value as the aggregated value, using Average. The pivot table will look something like this:
      +John        .3450
        5/14/2010  1.234
        5/15/2010  3.450
        5/16/2010  -3.25
    What I think should happen here is that the values for each date are averaged, and then those daily averages are averaged to produce the value in the same row as the name, John. But that is not what it does. It takes the average for each date, which it shows across from the date, but then instead of averaging those numbers, it goes back to the raw data and computes the average of all of John's values. It should show the average of the daily averages, to correspond with the tree hierarchy, but instead it just shows the average over all of John's values. It essentially aggregates at only one level, while visually creating sub-levels that it does not use. Does anyone know how to change this, or understand by what logic this makes sense? Why would I create any sub-groupings if I cannot compute aggregates on them?
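
    As a workaround sketch outside the pivot table (column letters assumed: A = Name, B = Date, C = Value; cell placement E1:E3 is arbitrary): compute each daily average explicitly with AVERAGEIFS, then average those cells yourself, since the pivot subtotal won't:

        =AVERAGEIFS($C:$C, $A:$A, "John", $B:$B, DATE(2010,5,14))
        =AVERAGEIFS($C:$C, $A:$A, "John", $B:$B, DATE(2010,5,15))
        =AVERAGEIFS($C:$C, $A:$A, "John", $B:$B, DATE(2010,5,16))
        =AVERAGE(E1:E3)    (the average of daily averages, assuming the three formulas above sit in E1:E3)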

    Read the article

  • Excel concatenate strings from cells listed in third cell

    - by Puddingfox
    I have an Excel 2007 workbook that has five columns:
      A. A list of machines
      B. A list of service numbers for each machine
      C. A list of service names for each machine
      ...(nothing here)
      I. A list of service numbers
      J. A list of service names
    Each machine listed in column A has one or more services running on it from the list in column J. I would like to be able to add services to a machine (i.e. update the cell in column C) by simply adding another comma-separated number to column B. For example, the first row would look like this, assuming Machine1 has the first three services:
      |    A     |   B   |       C
      | Machine1 | 1,2,3 | HTTP,HTTPS,DNS
    Right now I have to manually update the formula in column C for each change I make. The current formula is:
      =CONCATENATE(J1,",",J2,",",J3)
    I would like to use something like this (please forgive my syntax; I'm a coder and I'm treating cell B1 as if it were an indexed array):
      =CONCATENATE(CELL("J"+B1[0]), ",", "J"+B1[1], ",", "J"+B1[2])
    although having a variable number of services makes this even more difficult. Is there any way of doing this? For reference, these are columns I and J:
      | I  | J
      | 1  | HTTP
      | 2  | HTTPS
      | 3  | DNS
      .....
      | 16 | Service16
    I don't know very much about Excel, so any help is greatly appreciated.
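
    Excel 2007 has no built-in function to join a variable-length lookup like this, but a small VBA user-defined function can (a sketch; the function name, argument names and range are mine):

        Function JoinServices(nums As String, svcTable As Range) As String
            ' nums is a comma-separated list like "1,2,3";
            ' svcTable is the two-column lookup range, e.g. $I$1:$J$16
            Dim parts() As String, i As Long, result As String
            parts = Split(nums, ",")
            For i = LBound(parts) To UBound(parts)
                If Len(result) > 0 Then result = result & ","
                result = result & Application.WorksheetFunction.VLookup( _
                    CDbl(Trim(parts(i))), svcTable, 2, False)
            Next i
            JoinServices = result
        End Function

    Cell C1 would then hold =JoinServices(B1, $I$1:$J$16), and adding "4" to B1 updates C1 automatically.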

    Read the article

  • Relation between server_name in nginx sites-available, /etc/hosts file and A-records

    - by user2818584
    I have the following two server blocks in my config file in sites-available:
      server {
          listen 80;
          server_name www.mydomain.be;
          root /usr/share/nginx/html;
          index index.html index.htm;
          location / {
              try_files $uri $uri/ =404;
          }
      }
      server {
          listen 80;
          server_name sub.mydomain.be;
          root /usr/share/nginx/sub;
          index index.html index.htm;
          location / {
              try_files $uri $uri/ =404;
          }
      }
    I also created an A-record for both www.mydomain.be and sub.mydomain.be with the IP of my server as the value. Yet when I try to reload my nginx configuration with service nginx reload, it fails. When I remove the second server block, it reloads as expected. I know this topic is popular, and that there are loads of such [nginx][subdomain] questions here, but none of them seems to discuss explicitly how the following three things hang together: virtual hosts or server blocks in nginx (esp. server_name matching); the effect of A-records on how nginx processes requests; and the need to add hosts to /etc/hosts. Right now I have the impression that a lack of knowledge of this bigger picture, rather than of specific nginx configuration, prevents me from making this work.
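
    Before anything else, it is worth capturing the actual parse error, since "it fails" usually comes with a specific message:

        sudo nginx -t    # tests the configuration and prints the offending file and line

    (A stray character or duplicated directive in the second block is one guess, but only the nginx -t output can say. As for the bigger picture: A-records and /etc/hosts only decide which machine a request reaches; nginx never consults DNS for routing and matches the request's Host header against server_name after the packet arrives.)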

    Read the article

  • Configuring DNS and IIS for multiple domains on a single server

    - by RichardS
    I might be overcomplicating this, but... I am hosting several websites, and the DNS for their domains, on a single server: domain1.net, domain1.com, domain2.net. I have three items which I'm trying to work out whether to achieve by DNS, by IIS hostnames (bindings), or by IIS redirects.
    1. Where I have domain1.net and domain1.com, I want everything from both (all emails and web requests) to just point to domain1.net. Can I do this at the DNS level, or do I have to set up the email addresses as forwarders on the email server and the domain as a hostname in IIS? For example: [email protected], [email protected], www.domain1.com, www.domain1.net.
    2. I want to make sure that requests for domain1.net and www.domain1.net both resolve to the same place. Should this be done with DNS, with multiple hostnames, or with IIS redirects?
    3. If I then want one webmail site serving all of the domains (webmail.domain1.net, webmail.domain2.net), is it best to do this with a CNAME in DNS or with host headers in IIS?
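
    A hedged sketch of the DNS side (the IP address is a placeholder): web traffic can be funneled with A/CNAME records, and mail for domain1.com can be delivered to the same server via MX, but rewriting an @domain1.com address to its @domain1.net mailbox has to happen as an alias on the mail server, not in DNS. IIS would still need matching host-header bindings for each name it should answer to.

        domain1.net.          IN A      203.0.113.10
        www.domain1.net.      IN CNAME  domain1.net.
        www.domain1.com.      IN CNAME  www.domain1.net.
        domain1.com.          IN MX 10  mail.domain1.net.
        webmail.domain2.net.  IN CNAME  webmail.domain1.net.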

    Read the article

  • Can I recover a zpool after it's been exported, given that devices have not been reallocated?

    - by cali-spc
    I had a zpool we'll call 'testpool'. testpool had 3 devices included in it, and a single zfs called 'test'. I needed to move 'test' to a new, smaller pool. I wanted to name the new pool the same name 'testpool'. Basically did the following:
      zfs send testpool@backup > /tmp/test-dump
      zpool export -f testpool
      zpool create -f testpool newdevice
      zfs receive -F testpool < /tmp/test-dump
    Unfortunately I found out that the testpool@backup snapshot was the wrong snapshot. Too old. I have yet to reallocate the three devices that were in the OLD testpool. (None of these 3 devices are 'newdevice', they are a separate 3.) Is there any way I can recover data in those devices? I'm thinking since I named the new, smaller pool the same as the old zpool, I'm pretty much SOL. But if not, that would be nice to know. Edit - more info: I did a 'zpool import' and got this:
      bash-3.00# zpool import
        pool: testpool
          id: 14781458723915654709
       state: ONLINE
      action: The pool can be imported using its name or numeric identifier.
      config:
              testpool   ONLINE
                c5t8d0   ONLINE
                c5t9d0   ONLINE
                c5t10d0  ONLINE
    So I'm guessing I just need the syntax to import this zpool using its numeric identifier, while giving it a new name. S.
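
    If the old devices really have not been touched, import-by-id should be all that is needed: zpool import accepts a new pool name as a second argument, which sidesteps the name clash with the live 'testpool'. A sketch using the id from the output above (the new name 'oldtestpool' is arbitrary):

        zpool import 14781458723915654709 oldtestpool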

    Read the article

  • Unable to force Debian to do unattended install... libc6 wants interactive confirm

    - by JD Long
    I'm trying to create a script that forces a Debian Lenny install to install the latest version of CRAN R. During the install it appears libc6 is upgraded, and the installer wants interactive confirmation that it's OK to restart three services (mysql, exim4, cron). This process HAS to be unattended, as it runs on Amazon's Elastic Map Reduce (EMR) machines, but I'm running out of options. Here are a few things I've tried. This previous question appears to be exactly what I'm looking for, so I set up my install script as follows:
      # set my CRAN repos... yes, I know there's a new convention about where to put these.
      echo "deb http://cran.r-project.org/bin/linux/debian lenny-cran/" | sudo tee -a /etc/apt/sources.list
      echo "deb-src http://cran.r-project.org/bin/linux/debian lenny-cran/" | sudo tee -a /etc/apt/sources.list
      # set the dpkg.cfg options per the previous SuperUser question
      echo "force-confold" | sudo tee -a /etc/dpkg/dpkg.cfg
      echo "force-confdef" | sudo tee -a /etc/dpkg/dpkg.cfg
      export DEBIAN_FRONTEND=noninteractive
      # add the key to the keyring so apt doesn't complain
      gpg --keyserver pgp.mit.edu --recv-key 381BA480
      gpg -a --export 381BA480 > jranke_cran.asc
      sudo apt-key add jranke_cran.asc
      sudo apt-get update
      # install the latest R
      sudo apt-get install --yes --force-yes r-base
    But this script hangs at an interactive prompt asking for confirmation to restart the services. So I tried stopping the services first, using the following script:
      sudo /etc/init.d/mysql stop
      sudo /etc/init.d/exim4 stop
      sudo /etc/init.d/cron stop
      sudo apt-get install --yes --force-yes libc6
    This does not work either; the interactive screen comes back, but this time with only cron listed as the service that must be restarted. So is there a way to make libc6 just restart these services with no user input? Or is there a way to stop cron so it does not cause an interactive prompt? Maybe a creative option I've never thought of? Keep in mind that this system is brought up, some Hadoop code is run, and then it's torn down, so I can put up with side effects and bad behavior that we might not want on a production desktop machine or web server.
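
    One hedged observation about the script above: `export DEBIAN_FRONTEND=noninteractive` does not survive sudo, which resets most environment variables by default, so apt-get may never see it. Passing everything on the apt-get command line avoids that (a sketch):

        sudo DEBIAN_FRONTEND=noninteractive apt-get install --yes \
          -o Dpkg::Options::="--force-confdef" \
          -o Dpkg::Options::="--force-confold" \
          r-base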

    Read the article

  • Why does Windows make random "device connect" and "device disconnect" sounds?

    - by Steve Elmer
    Hello, I've been noticing this since Windows Vista, and I see it on Windows 7 now as well: throughout the day my computer makes apparently random device-connect and/or device-disconnect ("boink") sounds. I suppose it is the same sound you hear when connecting or disconnecting a USB device such as a thumb drive. I've noticed that this happens on each of the three computers I deal with: my machine at home, my wife's computer, and my machine at work. It happens without any user action at all - i.e. I'll be just sitting there (hands off my mouse and keyboard), and the computer will make the sound. There is no visual cue or anything, just the sound. I have sometimes gone in pursuit of the sound - running virus scans, examining event logs, and observing Task Manager - but have never had any luck tracking this thing down. Surely someone else out there must be experiencing this too. Any ideas? Thanks, Steve

    Read the article

  • How should I set up protection for the database against SQL injection when all the PHP scripts are flawed?

    - by Tchalvak
    I've inherited a PHP web app that is very insecure, with a history of SQL injection. I can't fix the scripts immediately: I need them running to keep the website running, and there are too many PHP scripts to deal with from the PHP end first. I do, however, have full control over the server and the software on it, including full control over the MySQL database and its users. Let's estimate it at something like 300 scripts overall, 40 semi-private scripts, and 20 private/secure scripts. So my question is how best to go about securing the data, with the implicit assumption that SQL injection from the PHP side (e.g. somewhere in that list of 300 scripts) is inevitable. My first-draft plan is to create multiple tiers of differently-permissioned users in the MySQL database. That way I can first secure the data and scripts most in need of securing (the "private/secure" category), then the second tier of database tables and scripts ("semi-private"), and finally deal with the security of the rest of the PHP app overall (ultimately securing the database tables that hold essentially "public" information, e.g. whatever merely viewing the homepage requires). So: three database users (public, semi-private, and secure), with a different user connecting for each of the three groups of scripts (the secure scripts, the semi-private scripts, and the public scripts). This way I can prevent all access to "secure" from "public" or from "semi-private", and to "semi-private" from "public". Are there other alternatives I should look into? If a tiered access system is the way to go, what approaches are best?
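
    A sketch of the tiered users in MySQL terms (the database, table and user names are all made up; real grants would follow your schema, and passwords obviously need replacing):

        CREATE USER 'web_public'@'localhost' IDENTIFIED BY 'changeme1';
        CREATE USER 'web_semi'@'localhost'   IDENTIFIED BY 'changeme2';
        CREATE USER 'web_secure'@'localhost' IDENTIFIED BY 'changeme3';

        -- each tier sees only the tables its scripts need
        GRANT SELECT ON app.public_content TO 'web_public'@'localhost';
        GRANT SELECT, INSERT, UPDATE ON app.member_data TO 'web_semi'@'localhost';
        GRANT SELECT, INSERT, UPDATE, DELETE ON app.billing TO 'web_secure'@'localhost';

    An injected query then runs with the compromised script's privileges only, which is the point of the tiering.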

    Read the article

  • How to burn a data DVD in Windows XP

    - by SabreWolfy
    I am trying to burn a data DVD (DVD+R) in Windows XP SP3 on a Dell desktop computer. The computer has a licensed copy of Nero 6.3. Nero indicates that an update to version 6.6 is available, but following the link provided redirects me to the Nero website to purchase the upgrade, which I'm not interested in doing. After creating a project in Nero 6.3, inserting a blank DVD+R and trying to start burning the data DVD, Nero tells me to insert an appropriate disc into the drive; it does not seem to detect the blank DVD+R. I downloaded InfraRecorder and cdrtfe from SourceForge. Neither of these programs worked either: both indicated that I should insert the correct media, with cdrtfe saying there is no disc in the drive. I tried another blank DVD+R with the same result. I then inserted a CD-R containing data into the drive, and Windows read it without a problem, so I have no reason to believe the drive is faulty. I am aware that Windows XP itself is not able to burn DVDs; however, it seems that three third-party programs are also unable to burn a data DVD in Windows XP. The specifications shown in Nero indicate that DVD+R is compatible with the drive. How can I burn a backup data DVD in Windows XP?

    Read the article

  • CentOS Latency High Troubleshooting

    - by Sarah Weinberger
    I have two CentOS servers connected via a 10 Gbit fiber-optic cable, with a network emulator connected between them. All three units sit on a desk in the lab. There is also a regular 1 Gbit Ethernet cable connected to each of the machines, which provides internet connectivity. When I set the emulator's latency to roughly below 30 ms, all is fine. When the latency gets to 70 ms and above, and definitely at 130 ms, the network layer stalls. For instance, if I set the latency (delay) to 70 ms, then launching TeamViewer (or any other application that uses network connectivity) never completes or does not work. There is no timeout message, simply no response. I have to lower the latency back down to zero to see any response and have the box start working again. What is the problem, and how would I go about fixing it? It seems to me some setting in Linux causes one of the CentOS networking drivers to sit in an infinite loop or something. eth0 is the connection to the internet, with all settings at their defaults; eth2 is the 10 Gbit fiber-optic connection to the other computer, with the MTU set to 9600 and all other parameters at default values.
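
    One guess worth testing: at 10 Gbit/s, a 70-130 ms delay gives a bandwidth-delay product well over 100 MB, far beyond the default TCP buffer limits, so connections can appear to hang rather than merely slow down. A sketch of raising the limits on both machines (the values are illustrative, not tuned):

        sysctl -w net.core.rmem_max=134217728
        sysctl -w net.core.wmem_max=134217728
        sysctl -w net.ipv4.tcp_rmem="4096 87380 134217728"
        sysctl -w net.ipv4.tcp_wmem="4096 65536 134217728"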

    Read the article

  • Log Shipping breaking daily on SQL Server 2005

    - by IT2
    I am facing a somewhat serious problem with log shipping on SQL Server 2005, and I am having trouble correcting it, so I will ask SF's experts for help. I have a Windows 2003 server (PROD) that ships transactional log backups to two other servers: STAND1, a Windows 2003 server with SQL Server 2005, and STAND2, a Windows 2008 R2 server with SQL Server 2005. The problem is that log shipping to STAND2 breaks for ~90 minutes at certain times of the day and then recovers without intervention. The breaks occur at times when the backup file is larger (after reindexing, etc.). I can see the message below logged by the COPY job: *** Error: The specified network name is no longer available. The copy agent was breaking dozens of times a day, only to the STAND2 server, and after the changes below it "only" breaks about twice a day: the frequency of the backup job was changed from 5 minutes to 10 minutes; instead of backing up the 4 databases to the same folder, the log backups are now saved in separate folders for each database; the backup job no longer runs 24 hours a day, but only for the 14 hours a day when people are working on the database; and I configured the SQL Server instances on all three servers to limit their memory, leaving more memory to the OS. Now I don't know what to do. Any help will be much appreciated! Thanks!

    Read the article

  • Windows and file system abstraction - how much does it matter where something comes from?

    - by deceze
    I have come across the following phenomenon and would like to know how leaky Windows' file system abstraction is or if there's something else involved. I partitioned the hard disk of my MacBook Pro and installed Windows 7 (64 bit). The Bootcamp driver package includes file system drivers (right term?) that enable Windows to access the Mac OS HFS+ partition. AFAIK it's a read-only access, but it works. Now, I have some disk images of stuff I usually install, so I grabbed a copy of Daemon Tools to mount them. When I mount an image saved on the HFS+ partition, about two out of three installers on these disks (usually InstallShield) crash with all sorts of weird errors. Most are just gibberish that lead to all sorts of non-solutions on Google, one was "This application is not the right type for your computer, check if you need 32 or 64 bit versions." When moving the image files to another Windows 7 computer on the network and mounting them from the network share, they work fine. My question now is, why do applications behave differently depending on whether the read-only image file, which should be abstracted away through the read-only virtual Daemon Tools drive, is located on a read-only HFS+ partition or on a Windows network share? And I'll just roll this into the question as well since I was wondering: Does the file system of a network share matter? Does the client system need to understand the file system of the share host or is that abstracted away in SMB?

    Read the article

  • Exclude list of specific files in wget

    - by nanker
    I am trying to download a lot of pages from a website on dial-up, and it can be brutally slow. I have almost got the perfect wget command, but because I'm downloading pages from the same site, wget wastes time downloading the same standard images for each page. If I know the names of the default page images, is there any way to have wget ignore them, and thus avoid downloading them for each and every page? Here is an example of one of the wget commands that my shell script generates into another shell script to download all of the pages:
      mkdir candy-canes-on-the-flannel-board-in-preschool
      cd candy-canes-on-the-flannel-board-in-preschool
      wget -p -nd -A jpg,html -k http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/
      wget -c --random-wait --timeout=30 --user-agent="Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.3) Gecko/2008092416 Firefox/3.0.3" http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/ -O "candy-canes-on-the-flannel-board-in-preschool"
      rm Baby-and-Toddler.jpg Childrens-Books.jpg Creative-Art.jpg Felt-Fun.jpg Happy_Rainbow-e1338766526528.jpg index.html Language-and-Literacy.jpg Light-table-Button.jpg Math.jpg Outdoor-Play.jpg outer-jacket1-300x153.jpg preschoolspot-button-small.jpg robots.txt Science-and-Nature.jpg Signature-2.jpg Story-Telling.jpg Tags-on-Preschool.jpg Teaching-Two-and-Three-Year-olds.jpg
      cd ../
    Now, I realize the script is not as savvy as it could be, but it does what I need at the moment, except that, as the rm command shows, I would rather prevent wget from downloading those files in the first place if possible. I almost forgot to mention: there are two wget commands because the first one downloads the page as index.html, and for some reason it does not open in my browser; however, when I open it and look at it in vim, all of the page's content is there, so I am not sure why it does not open. But if I just issue the second wget command as it is, then that page - really the same file with an alternate name - opens up fine. Something that, if I could fix it, would also help to streamline the process.
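
    If the filenames are known in advance, wget's reject list may do exactly this (a sketch; -R/--reject is documented to take comma-separated name suffixes and patterns, though how it interacts with -A on the same command line is worth a quick test):

        wget -p -nd -A jpg,html -k \
          -R "Baby-and-Toddler.jpg,Childrens-Books.jpg,Creative-Art.jpg,Felt-Fun.jpg" \
          http://www.teachpreschool.org/2011/12/candy-canes-on-the-flannel-board-in-preschool/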

    Read the article
