Search Results

Search found 10445 results on 418 pages for 'basic'.

  • Is there a small business router that shows bandwidth usage graphs in the admin panel?

    - by Robert Drake
    I support a large number of public libraries that are having their networks upgraded in response to a grant application. These libraries are generally home to between 6 and 15 computers and have little or no tech services, either onsite or contracted remotely. In order to justify current and future purchases, a number of the libraries have requested routers that can provide bandwidth usage graphs that they can show to their managing boards. Is there a small business router that displays traffic graphs in the router administration web interface? The router needs to support DHCP and basic firewalling. No other features are required. Further, the reports just need to show overall trends. It is not necessary to show traffic by IP, by protocol/application, or by time of day. They just need an overall week-to-week, month-to-month trend line. I'm familiar with MRTG/PRTG/tools that collect SNMP data from the router, but the libraries don't have the expertise for the configuration. I've considered installing the Tomato firmware on some cheap home/home office routers, but a commercial product that can simply be purchased would be significantly simpler. Also, the library boards would be much more likely to approve the purchase of a commercial product over a 'hacked' one. Any assistance would be appreciated.
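
    For reference, I believe the data these graphs are built from is just the interface octet counters that MRTG/PRTG poll over SNMP; a manual poll would look something like the sketch below (the community string, address, and interface index are only placeholders, not from any particular router):

      # poll total bytes in/out on interface index 2 (assuming that's the WAN port)
      snmpget -v2c -c public 192.168.1.1 IF-MIB::ifInOctets.2 IF-MIB::ifOutOctets.2

    I'd rather not have the libraries run anything like this themselves, which is why I'm hoping for a router that graphs it in its own admin page.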

  • trying to allow domain admins access in apache

    - by sharif
    I am trying to authenticate domain admins through Apache and it is not working. The error I get is as follows:

      [Mon Sep 24 14:54:45 2012] [debug] src/mod_auth_kerb.c(1432): [client 172.16.0.85] kerb_authenticate_user entered with user (NULL) and auth_type Kerberos
      [Mon Sep 24 14:54:45 2012] [debug] src/mod_auth_kerb.c(915): [client 172.16.0.85] Using HTTP/[email protected] as server principal for password verification
      [Mon Sep 24 14:54:45 2012] [debug] src/mod_auth_kerb.c(655): [client 172.16.0.85] Trying to get TGT for user [email protected]
      [Mon Sep 24 14:54:45 2012] [debug] src/mod_auth_kerb.c(569): [client 172.16.0.85] Trying to verify authenticity of KDC using principal HTTP/[email protected]
      [Mon Sep 24 14:54:45 2012] [debug] src/mod_auth_kerb.c(994): [client 172.16.0.85] kerb_authenticate_user_krb5pwd ret=0 [email protected] authtype=Basic
      [Mon Sep 24 14:54:45 2012] [debug] mod_authnz_ldap.c(561): [client 172.16.0.85] ldap authorize: Creating LDAP req structure
      [Mon Sep 24 14:54:45 2012] [debug] mod_authnz_ldap.c(573): [client 172.16.0.85] auth_ldap authorise: User DN not found, LDAP: ldap_simple_bind_s() failed

    Below is what I have in my httpd file:

      Alias /compass "/data/intranet/html/compass"
      <Directory "/data/intranet/html/compass">
          AuthType Kerberos
          AuthName KerberosLogin
          KrbServiceName HTTP/intranet.xxx.com
          KrbMethodNegotiate On
          KrbMethodK5Passwd On
          KrbAuthRealms xxx.COM
          Krb5KeyTab /etc/httpd/conf/intranet.keytab
          # require valid-user
          # Options Indexes MultiViews FollowSymLinks
          # AllowOverride All
          # Order allow,deny
          # Allow from all
          # SetOutputFilter DEFLATE

          # taken from http://blogs.freebsdish.org/tmclaugh/2010/07/15/mod_auth_kerb-ad-and-ldap-authorization/
          # download extra module and install
          # Strip the kerberos realm from the principle.
          # MapUsernameRule (.*)@(.*) "$1"

          AuthLDAPURL "ldap://echo.uk.xxx.com akhutan.usa.xxx.com/dc=xxx,dc=com?sAMAccountName"
          AuthLDAPBindDN cn=Administrator,ou=Users,dc=xxx,dc=com
          AuthLDAPBindPassword ***
          Require ldap-group cn=Domain Admins,ou=Users,dc=xxx,dc=com
      </Directory>

    I have followed this guide, and I have downloaded and installed the tarball. When I try to uncomment MapUsernameRule I get a failure when restarting Apache:

      Reloading httpd: not reloading due to configuration syntax error

    I am using CentOS 5 64-bit. I have added the following line, but I still get the syntax error:

      LoadModule mod_map_user modules/mod_map_user.so
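
    In case it helps, a quick way to see exactly which line the reload is choking on should be Apache's own syntax check (commands as I understand them on a stock CentOS 5 install; the init-script variant is an assumption on my part):

      httpd -t
      # or via the init script:
      service httpd configtest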

  • restoring a failed SBS 2000 box...

    - by Brad Pears
    Hi there, I read a post where you had mentioned you have had a PERC card blow up on you in an SBS box... I've got a similar situation where one of my RAID drives failed and then the power supply failed before I could replace the drive... I then replaced the power supply and the failed drive and reconfigured the RAID array. I had a recent full backup of my Win2k SBS box's C: drive stored on my Symantec Backup Exec server, so I installed Win2k Server on the C: partition and then, once I had that up and running, installed the Backup Exec agent so as to do a restore of the entire C: drive including system state. This all worked just fine, until I had to reboot. I received an "incorrect drive configuration" error and then it hangs. I figure that likely makes sense because my RAID array is configured slightly differently now, in that the partitions may be sized ever so slightly differently than they were before... Is there a way I can just restore from my backup BUT maybe exclude some of the registry and hidden boot files it wants to restore, so that it is booting with the current configuration now active on that machine - not the pre-blow-up configuration files? I also read a post that indicated you might have to install the exact same service pack etc. before attempting a restore, but that does not make sense to me, given that the entire C: drive contents are going to be overwritten by the restore anyway. The basic OS install is just to be able to get the Backup Exec agent installed. I can't understand why one would need to install the exact same SP level. Can you shed some light on what I might be able to do to get this thing up and running? Thanks, Brad

  • AuthBasicProvider: failover not working when the first LDAP is down?

    - by quanta
    I've been trying to set up redundant LDAP servers with Apache 2.2.3.

    /etc/httpd/conf.d/authn_alias.conf:

      <AuthnProviderAlias ldap master>
          AuthLDAPURL ldap://192.168.5.148:389/dc=domain,dc=vn?cn
          AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
          AuthLDAPBindPassword pa$$w0rd
      </AuthnProviderAlias>

      <AuthnProviderAlias ldap slave>
          AuthLDAPURL ldap://192.168.5.199:389/dc=domain,dc=vn?cn
          AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
          AuthLDAPBindPassword pa$$w0rd
      </AuthnProviderAlias>

    /etc/httpd/conf.d/authz_ldap.conf:

      #
      # mod_authz_ldap can be used to implement access control and
      # authenticate users against an LDAP database.
      #
      LoadModule authz_ldap_module modules/mod_authz_ldap.so

      <IfModule mod_authz_ldap.c>
          <Location />
              AuthBasicProvider master slave
              AuthzLDAPAuthoritative Off
              AuthType Basic
              AuthName "Authorization required"
              AuthzLDAPMemberKey member
              AuthUserFile /home/setup/svn/auth-conf
              AuthzLDAPSetGroupAuth user
              require valid-user
              AuthzLDAPLogLevel error
          </Location>
      </IfModule>

    If I understand correctly, mod_authz_ldap will try to search for users in the second LDAP server if the first server is down or OpenLDAP on it is not running. But in practice, it does not happen. Testing by stopping LDAP on the master, I get a "500 Internal Server Error" when accessing the Subversion repository. The error_log shows:

      [11061] auth_ldap authenticate: user quanta authentication failed; URI / [LDAP: ldap_simple_bind_s() failed][Can't contact LDAP server]

    Did I misunderstand?
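
    For completeness, I also considered the space-separated host-list form of AuthLDAPURL, which I understand mod_authnz_ldap supports for redundant servers; a sketch of that variant, untested on my 2.2.3 build:

      <AuthnProviderAlias ldap failover>
          AuthLDAPURL "ldap://192.168.5.148 192.168.5.199/dc=domain,dc=vn?cn"
          AuthLDAPBindDN cn=anonymous,ou=it,dc=domain,dc=vn
          AuthLDAPBindPassword pa$$w0rd
      </AuthnProviderAlias>

    Would that be the more reliable approach, or should "AuthBasicProvider master slave" already fail over the way I expected?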

  • 16TB Volumes and SNMP On Windows

    - by John K
    As volumes larger than 16TB became more common, it was recognized that the 32-bit value used to report disk size and usage within the standard "HOST-RESOURCES" MIB in SNMP was not large enough to report the proper disk size. Net-SNMP seems to have addressed this issue by simply manipulating the value of "AllocationUnits" to maintain a 32-bit value for disk utilization (since total disk size/usage is equal to the 32-bit space value times the allocation unit), to allow for the calculation of a volume larger than 8/16TB. Presuming you don't have any reporting interest in the allocation unit, this seems like a fine solution. https://bugzilla.redhat.com/show_bug.cgi?id=654384 Windows' built-in SNMP service, however, seems to continue to suffer from this error, simply reporting the modulo of the used/assigned disk space, resulting in inaccurate disk size reporting. Is there a way to enable Windows to correctly report disk usage for volumes over 16TB? We attempted to simply install Net-SNMP 5.5 x64 and disable the Windows SNMP service entirely, however this unfortunately did not fix our issue. I've seen people in the Cacti community mention simply scripting out a solution. Unfortunately, we're using Observium for quick and basic systems monitoring. If the issue can't be corrected on the Windows side, can Observium be made to report custom MIBs?
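
    To put rough numbers on the limit (the allocation unit below is just an example): hrStorageSize and hrStorageUsed are Integer32 values, so with a 4 KB allocation unit the largest representable volume is about

      2,147,483,647 blocks x 4,096 bytes/block ≈ 8 TB (2^31 x 2^12 = 2^43 bytes)

    which is why Net-SNMP inflates AllocationUnits (say to 64 KB) so the block count stays below 2^31, while the Windows agent appears to just let the value wrap.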

  • CentOS security for lazy admins

    - by Robby75
    I'm running CentOS 5.5 (basic LAMP with Parallels Power Panel and Plesk) and have thus far neglected security (because it's not my full-time job, and there is always something more important on my to-do list). My server does not contain any secret data and no lives depend on it - basically, what I want is to make sure it does not become part of a botnet; that is "good enough" security in my case. Anyway, I don't want to become a full-time paranoid admin (constantly watching and patching everything because of some obscure problem), and I also don't care about most security problems like DoS attacks or problems that only exist when using some arcane settings. I'm in search of a "happy medium", for example a list of known important problems in the default installation of CentOS 5.5 and/or a list of security problems that have actually been exploited - not the typical endless list of buffer overflows that are "maybe" a problem in some special case. The problem that I have with the usually recommended approaches (joining mailing lists, etc.) is that the really important problems (something where an exploit exists, that is exploitable in a common setup, and where the attacker can do something really useful - i.e. not a DoS) are completely and utterly swamped by millions of tiny security alerts that surely are important for high-security servers, but not for me. Thanks for all suggestions!
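
    One thing I've been looking at is limiting patching to packages with security errata, which feels like the right level of effort; a sketch of what I mean (I'm assuming the yum-security plugin is available for CentOS 5, and I've read that the CentOS mirrors may not actually ship the errata metadata it needs, so that's worth verifying):

      yum install yum-security        # plugin package name on RHEL/CentOS 5, as far as I know
      yum --security check-update     # list only updates flagged as security fixes
      yum --security update           # apply only those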

  • Pitfalls to using Gluster as a home/profile directory server?

    - by Bart Silverstrim
    I was asking recently about options for divvying up access to file servers, as we have a NAS solution that gets fairly bogged down when our users (with giant profiles, especially) all log in nearly simultaneously. I ran across Gluster and it looks like it can cluster different physical storage media into a single virtual volume and share it out like a virtual NAS from the client perspective, and it supports CIFS. My question is whether something like this would be feasible to use for home and profile directories in an Active Directory environment. I was worried about ACLs, primarily, as I didn't think CIFS was fine-grained enough to support NTFS permissions, and it didn't look like Gluster exports those permission levels, just the base permissions for basic file sharing. I got the impression that using Gluster would allow for data to be redundant across multiple servers and would speed up access to the files under heavy load, while allowing us to dynamically boost storage capacity by just adding another server and telling Gluster's master node to add that server. Maybe I'm wrong with my understanding of it though. Anyone else use it or care to share how feasible this is?
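
    For context, my mental model of the setup is something like the sketch below (hostnames and brick paths are made up, and re-exporting over CIFS would presumably go through a Samba gateway on top of the Gluster mount, which is one of the things I'm unsure about):

      # two-way replicated volume built from a brick on each server
      gluster volume create homes replica 2 server1:/export/homes server2:/export/homes
      gluster volume start homes
      # clients (or a Samba box re-exporting it) mount the volume
      mount -t glusterfs server1:/homes /mnt/homes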

  • Dedicated server with a lot of storage and good support - and cost-effective

    - by Martin Burger
    Hello, I am from Germany and looking for a dedicated server located in the US with a lot of storage: 750 - 1500 GB. CPU speed and amount of memory are secondary, the server will host large amounts of media files via http and ftp - the basic task is to help people exchange media files. In Germany, there are some good offers, like "Root Server EQ6" at www.hetzner.de. For example, that company provides support of high quality, and their plans are very cost-effective. The plan mentioned above costs about $90 per month and provides two 1500 GB SATA-II HDDs (Software-RAID 1). In the US, I found (amongst others) Go Daddy and rackspace. Go Daddy offers some "Storage Monster" plans that include 2 x 1,000 GB hard drives for about $180 per month - already twice as much as Hetzner above. However, I found some blog and forum entries that complain about the support provided by Go Daddy. Rackspace seems to provide decent support, but they are very "upscale". Their dedicated servers are customizable and start at $419 - thus, about 4.5 times as much as Hetzner. Can anybody recommend a solution / plan that is comparable to the one by Hetzner? Or are prices for dedicated servers in general much higher than in Germany? Regards, Martin

  • Windows XP Language, explorer.exe

    - by nmuntz
    Hi, I was given by my company a laptop with Windows XP Professional in Spanish. I would like to translate it to English, since I really DISLIKE using localized versions of programs. I have read about Windows MUI packs; however, you MUST have Windows XP Pro in English in order to translate it to another language - you can't translate it TO English from another language. Since reinstalling the OS using a Win XP CD in English is not an option (I don't have the license nor the CD, and don't have domain privileges to rejoin my computer to the domain), I was wondering what are the essential files that contain localized strings of text. I was doing some research, and apparently explorer.exe has many of the Windows error messages and other strings. Will replacing my original explorer.exe with one from Windows XP in English be enough (and work) for having a "basic" English version of Windows? I'm mainly interested in having error messages, the Start menu, and the Control Panel in English. Also, does it HAVE to be the same version as the Service Pack I'm running? Besides explorer.exe, are there any other essential files that I should try to get and replace? Do you see any "dangers" in replacing these files with English versions? Thanks in advance for your help.

  • Excel file growing huge (>150 MB)

    - by Josh
    There is one particular Excel file that is used by a number of employees at my company. It is edited from both Excel 2003 and 2007, with the "Sharing" feature turned on to allow multiple writers at once. The file has a decent amount of data on several sheets with some basic formatting, and used to be about 6 MB, which seems reasonable for its content. But after a few weeks of editing, the file grew to 10, then 20 MB, and eventually skyrocketed to more than 150 MB, even though it still has about the same amount of data as before. It now takes 5-10 minutes to open it, and that much time again to save it. The first time this happened, I copied the content of each sheet into a new, blank workbook and saved the new workbook; this brought it back down to about 6 MB. Now, it has blown up again. The workbook uses the "Data Validation" feature to limit the values in certain columns to the contents of a few named ranges. Copying all the data into a new workbook means re-setting up all the data validation, which is a pain and not something that we want to do every month. As a troubleshooting step, I tried saving the file in "XML Spreadsheet 2003" format, hoping to get some insight into what was being stored. Sure enough, the file was almost a gig, and almost all of the 10 million lines look like this:

      <NamedCell ss:Name="Z_21D5114F_E50C_46AC_AA4F_C3FF540C717F_.wvu.FilterData"/>
      <NamedCell ss:Name="Z_1EE2BA5E_3011_4F9A_8ACD_E58835250FC4_.wvu.FilterData"/>
      <NamedCell ss:Name="Z_1E3BDCEA_6A72_4ECC_BF4F_7B03CC66181E_.wvu.FilterData"/>

    I've seen a few VBScripts online to manage and enumerate named cells that are hidden in Excel's built-in interface, though I wonder how they'd handle my 10 million named cells. What I really need, though, is an understanding of why this keeps happening. What actions in Excel could be causing this?
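
    Along the lines of those VBScripts, this is the sort of minimal VBA I had in mind for stripping the hidden names (the pattern is taken from the XML dump above; I'd test it on a copy first, since these look like per-user custom-view filter settings):

      Sub RemoveHiddenFilterNames()
          Dim i As Long, removed As Long
          ' walk the Names collection backwards so deletions don't shift the index
          For i = ThisWorkbook.Names.Count To 1 Step -1
              If ThisWorkbook.Names(i).Name Like "*.wvu.FilterData" Then
                  ThisWorkbook.Names(i).Delete
                  removed = removed + 1
              End If
          Next i
          MsgBox removed & " hidden names removed"
      End Sub

    Even if that cleans it up, I'd still like to understand what keeps creating them.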

  • Virtual (ESXi4) Win 2k8 R2 server hangs when adding role(s)

    - by Holocryptic
    I'm trying to provision a 2k8r2 Enterprise server in ESXi4. The OS installation goes fine: VMware Tools, adding to the domain, updates - all the basic stuff before you start adding Roles and Features. I've had this happen on two attempts already, and I'm not sure where the problem might be. I don't think it's hardware, because I have another 2k8r2 Standard server that's running fine. The only real difference is the install media. The server that's working was installed using a trial ISO and license. The one I'm having problems with is a full MAK installation. When I go to add a Role (the last case was Application Server) it gets all the way to "collecting installation results" before it hangs. CPU utilization in the vSphere client shows little spikes of activity with flatlines in between, but the whole console is locked up. The only way to release it is to power off and bring it back up. When you go to look at the added roles after bringing it back up, it shows that the role is installed, but I don't trust that something didn't get wedged in all of that. The first install I did was with thin disk provisioning. The second attempt was with regular disk provisioning. In both cases: 4 GB of RAM, 2 vCPUs. The VMware host is an HP ProLiant DL380 G6, RAID-1 OS volume, RAID-5 data volume, 12 GB RAM. Has anyone else had this problem, or know where I should start poking around?

  • Cisco QoS Guidance

    - by Kyle Brandt
    I have a 10M connection to the internet that is hooked into a 100M port. I am getting started with QoS, and am hoping for a little guidance on setting it up on a Cisco 3825 router. Right now I am going forward with the idea that I have to implement it on my router, and that the provider can't provide QoS for me. How I envision it working is that QoS will drop or queue packets on my router, and that will help prevent a situation where the provider has to start dropping a lot of packets. Right now all I am tasked with is making sure that one of the 3 LANs gets a certain slice (say 3M for Gig Lan1) of the 10M internet connection (but ideally this will be more flexible in the future).

      10M Internet on 100M port on HWIC-4ESW
                +-----------------------+
                |                       |
      Gig Lan1  |      Cisco 3825       |  Lan3 on HWIC-4ESW
                |                       |
                +-----------------------+
                       Gig Lan2

    I need to learn more about QoS, but having a target technology and maybe an example configuration will help me wrap my head around the reading I am doing a little more. Which Cisco QoS technology do you recommend for this particular situation? Do you have a basic sample config of how this might work? Right now the 10M line is not congested, so this is more to have something in place in case it starts to become mildly congested in the future.
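
    From the reading so far, the shape I had in mind is a parent shaper at the 10M rate with a child policy that guarantees a slice to Gig Lan1; a rough sketch (interface name, ACL name, and rates are placeholders, and I gather the CBWFQ/shaping syntax varies a bit by IOS release):

      class-map match-any LAN1-TRAFFIC
       match access-group name LAN1-SUBNET
      !
      ! child policy: guarantee ~3 Mbps to Gig Lan1 when the shaper is congested
      policy-map WAN-CHILD
       class LAN1-TRAFFIC
        bandwidth 3000
       class class-default
        fair-queue
      !
      ! parent policy: shape to the 10 Mbps handoff so queuing happens on this router
      policy-map WAN-PARENT
       class class-default
        shape average 10000000
        service-policy WAN-CHILD
      !
      interface FastEthernet0/0
       service-policy output WAN-PARENT

    Is that roughly the right direction, or is there a better fit for this situation?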

  • Windows Server 2003 DC hangs after network drivers update

    - by tcv
    Earlier today, we attempted to update the Broadcom BCM5716C network drivers on a Windows Server 2003 box (Dell PowerEdge T310, FWIW). Since then we have not been able to boot the server in any normal mode. Safe Mode works. Safe Mode with Networking and regular bootups hang at "Applying Network Settings." I haven't tried Last Known Good Configuration, nor have I tried Directory Services Restore Mode. I should also mention that the longest I've let "Applying Network Settings" run was perhaps 30 minutes. I spoke to Dell, since the server is under a basic warranty. They sent me the original Broadcom drivers. The trouble seems to be, however, that since I can only boot in Safe Mode, I can't install the application package as given. In Safe Mode, I receive the error: "The system administrator has set policies to prohibit this installation." I can install the drivers independently, but that doesn't allow the NICs to work. The most I've been able to get are Code 10 errors on each NIC. I plan to get back to the site tomorrow to attempt installation of a different NIC. I'm wondering what else I can try.

  • SharePoint Workflow "Failed on Start" only when PowerShell import script is called from Task Scheduler

    - by Matt Keller
    I created a simple PowerShell script that takes an XML file in a local directory on our SharePoint server and imports it into a specific SharePoint form library (a content-management-enabled library, if that makes any difference). This script works flawlessly if I run it from the PowerShell command line manually. I call it like such: ".\script_name.ps1". It completes without error and the item is imported into the form library successfully. The workflow begins on the item and everything is happy dandy. However, I run into issues when I set up a scheduled task using Windows Server 2008 R2's Task Scheduler. The task runs the script without error and it does actually import the XML into the form library. It looks perfectly normal, just as if I had run the script manually. However, after about 10 or 20 minutes the workflow status for that item changes from "In progress" to "Failed on Start (Retrying)". The scheduled task in question is a basic task and has only one action (Start a program). The "Program/script" box is set to "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" and the "Add arguments" box is set to the path of the actual ps1 script (C:\scripts\sharepoint_import.ps1). I've tried running the task as various users. I've also tried with and without the "Run with highest privileges" check box. Nothing seems to work. For reference, here is the script I am using to import items into the form library.
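
    For reference, the equivalent one-line command the task is configured to run looks like this (the -NoProfile and -ExecutionPolicy switches are additions I'm considering, in case profile or policy differences in the task context matter):

      C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -ExecutionPolicy Bypass -File C:\scripts\sharepoint_import.ps1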

  • I need help choosing between two configurations of the Dell Studio 14

    - by Adnan
    There are two configurations of the Dell Studio 14 (1458) which I'm looking at:

      Config 1: Core i7-720QM @ 1.6 GHz; ATI Mobility Radeon HD 5450 1GB; 4 GB DDR3 RAM @ 1066 MHz; 500 GB SATA HDD @ 7200 RPM; Price: $999
      Config 2: Core i5-430M; ATI Mobility Radeon HD 4530 512MB; 4 GB DDR3 RAM @ 1066 MHz; 500 GB SATA HDD @ 7200 RPM; Price: $874

    What I want to know is, would Config 1 still be able to do decent gaming (maybe some StarCraft II), and is there a great performance difference between the i5 and i7 processors? Is the $130 extra worth it for the i7 and the better graphics card? I do more than just basic computing. I plan on getting into web design (specifically using Photoshop and Dreamweaver), and I wish to do gaming. I know Config 1 is the better value, but I want to be sure that the $130 more is truly worth it. I don't have too much money and want to spend as wisely as possible, yet I am a computer geek and plan on doing a lot more than the average user.

  • I need an admin toolset for Windows 2003 and 2008

    - by eugeneK
    I know this is a way too general question, but anyway: I need a few tools. I will write down my tasks as a sysadmin, and if you have anything to automate my job I would be glad to hear about it. I don't mind paying for software if needed, unless it is way too expensive. First off, I back up all files on the server to local/office storage. I 7-Zip all SQL backup files, then move them over the network to a centralized location, and then FTP them from an office PC which has no FTP server installed and cannot have one. Backups happen at 4 AM in the morning, thus I need to schedule the compression and the FTP transfer that follows. Then I FTP all IIS web applications as a differential backup; the same goes for the VOD movies. The second tool I need is a system monitor which will watch all servers, both from the servers themselves and from an external location, for CPU/memory/hard disk and other basic failures. This tool should be able to call a website address with parameters, which will send me an email with a full report on failure. The third tool I need is a way to get all event logs from 10 Windows-based servers without accessing any of them manually. If you know of any solutions, thanks in advance.
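
    For the third task, I imagine the collection part could be scripted rather than bought; something like this PowerShell sketch is what I have in mind (server names, log choice, and counts are just placeholders):

      $servers = "SRV01","SRV02","SRV03"   # stand-ins for the 10 servers
      foreach ($s in $servers) {
          Get-EventLog -ComputerName $s -LogName System -EntryType Error,Warning -Newest 50 |
              Export-Csv "C:\logs\$s-system.csv" -NoTypeInformation
      }

    But if there is a packaged tool that does this plus the monitoring, even better.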

  • PostgreSQL failover cluster on Windows Server

    - by user36997
    We are looking for advice on how to set up a basic failover cluster for our application: We will be using 4 machines running Microsoft Windows Server (most probably 2003). All four will always run our application, which is essentially a web service. Load balancing is "outsourced" - somebody else handles the distribution of the web requests among the servers. Only one of the servers will be running the PostgreSQL server actively at any given time. Another server (of the four) also has the DB installed, but is on standby/passive. The DB data is stored on shared storage; no copying data between servers. Reads are done very frequently by many end-users, and in rather small chunks of data. Writes are done much less frequently, by fewer users, and in very large bulks of data. Now, how can one configure Microsoft Cluster Service to keep only one instance of the DB server and 4 instances (1 per server) of our application at all times? And does PostgreSQL integrate neatly with MSCS at all? Update: Instead of keeping the data on shared storage, I am also considering using log shipping to replicate the data on a couple of DB servers. There are two issues with this option: (1) Log shipping only makes sure that I have a second server that gets all of the data and is ready to take over. How do I implement the actual failure detection and failover switch? (2) Switching back: Suppose the master fails and the system automatically fails over to the slave, and later the master comes back online. I understand that with WAL shipping this will require reconfiguring the log shipping once again, and that switching back is far from seamless. Is that so?
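
    To make the log-shipping option concrete, the parts I understand so far look roughly like this (assuming PostgreSQL 8.3+ with the bundled pg_standby; paths, share names, and the trigger-file convention are placeholders):

      # postgresql.conf on the active node: ship completed WAL segments to the standby
      archive_mode = on
      archive_command = 'copy "%p" "\\\\standby\\wal_archive\\%f"'

      # recovery.conf on the standby: keep replaying WAL until the trigger file appears
      restore_command = 'pg_standby -t C:\failover.trigger C:\wal_archive %f %p'

    What I still don't see is which component creates the trigger file, i.e. the actual failure detection - which is my first question above.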

  • Setup Entourage for Exchange via HTTP communication

    - by Johandk
    Our ISP set up a hosted Exchange server for all our mail. I've set up all our Outlook users with no problems. We have two people using Mac OS X Leopard and Entourage. Entourage has the option of adding an Exchange account, but I have no idea how to tell it to connect to Exchange via HTTP. Here's an excerpt from the client setup docs the hosting company sent me for Outlook:

      1. Go to Control Panel.
      2. Select 'Mail'.
      3. Select 'Email accounts'.
      4. Under the E-mail tab select 'New'.
      5. Select 'Manually configure server settings......' - click Next.
      6. Select 'Microsoft Exchange' - click Next.
      7. Complete details as below with Microsoft Exchange Server as: [server address]
      8. Do not select 'Check Name'. Instead select 'More Settings'.
      9. Go to the Connection tab, and select the bottom option 'Connect to Microsoft Exchange using HTTP'. And then select the 'Exchange Proxy Settings' button.
      10. Enter Proxy server for Exchange.
      11. Check 'Only connect to proxy servers that have this principal name in their certificate', enter msstd:[servername]
      12. Proxy Authentication - select Basic Authentication.
      13. Select OK, and again, so that you return to the main screen. Now select 'Check Name'.
      14. Enter Username and Password: The username should now be the full name and underlined. If so, select Next, and then Finish.
      15. Next time you open Outlook, enter username and password.

    Any help GREATLY appreciated.

  • Resize a RAID 1 volume on OSX Snow Leopard - how? (Note: software raid)

    - by Emmel
    I've scoured the Internet in search of an answer to this question, and as usual with OSX-related topics, I often don't find any deep-dive technical explanations sufficient enough to feel confident doing dangerous things. Here is my question: I have a Mac Pro, running OS X 10.6.2. I have, as my main root/boot disk, a RAID 1 volume called "Mirror1". Mirror1 is comprised of two 1 TB disks. Mirror1, however, is fixed at 640 GB. That's because I originally took a 640 GB disk, bought a terabyte disk, mirrored it (using diskutil appleraid enable...), and when it synced I removed the 640 GB disk, replaced it with a second 1 TB disk, and synced again. Voila! A single 640 GB disk replaced by two 1 TB disks in a mirror... Actually, no. There's still something missing from the equation: Mirror1 needs to be expanded from 640 GB to 1 TB to match the partition sizes on each of those disks. How do I do this? Perhaps the diskutil output will help:

      -> diskutil list
      /dev/disk0
         #:  TYPE                    NAME        SIZE        IDENTIFIER
         0:  GUID_partition_scheme              *1.0 TB      disk0
         1:  EFI                                 209.7 MB    disk0s1
         2:  Apple_RAID                          999.9 GB    disk0s2
         3:  Apple_Boot              Boot OSX    134.2 MB    disk0s3
      /dev/disk1
         #:  TYPE                    NAME        SIZE        IDENTIFIER
         0:  GUID_partition_scheme              *1.0 TB      disk1
         1:  EFI                                 209.7 MB    disk1s1
         2:  Apple_RAID                          999.9 GB    disk1s2
         3:  Apple_Boot              Boot OSX    134.2 MB    disk1s3
      /dev/disk2
         #:  TYPE                    NAME        SIZE        IDENTIFIER
         0:  GUID_partition_scheme              *640.1 GB    disk2
         1:  EFI                                 209.7 MB    disk2s1
         2:  Apple_HFS               Mac Disk 2  536.7 GB    disk2s2
         3:  Microsoft Basic Data    BOOTCAMP    103.1 GB    disk2s3
      /dev/disk3
         #:  TYPE                    NAME        SIZE        IDENTIFIER
         0:  Apple_HFS               Mirror1    *639.8 GB    disk3

      -> diskutil appleraid list
      AppleRAID sets (1 found)
      ===============================================================================
      Name:         Macintosh HD
      Unique ID:    1953F864-B474-4EB6-8E69-41834EBD0247
      Type:         Mirror
      Status:       Online
      Size:         639.8 GB (639791038464 Bytes)
      Rebuild:      manual
      Device Node:  disk3
      -------------------------------------------------------------------------------
      #   Device Node   UUID                                   Status
      -------------------------------------------------------------------------------
      0   disk1s2       25109BAE-5697-40EA-B612-0217851444F7   Online
      1   disk0s2       11B83AB0-8148-4DB6-8761-DEF08C855F8D   Online
      ===============================================================================

    Thanks in advance.

  • .htaccess authorization requiring username/password for every resource

    - by webworm
    I am using Apache2 on Ubuntu and I am having some "weird" user authorization issues. I am using .htaccess to control access to my directories. I have many users and have grouped them into user groups, which are defined in a "group" file. I then use .htaccess within each directory to define which users have access to the directory and which do not. Here is an example .htaccess file:

      AuthUserFile /var/local/.htpasswd
      AuthGroupFile /var/local/groups
      AuthName "Username and Password Required"
      AuthType Basic
      require group design admin

    Everything is working with one exception. I added a new user to one of my groups, and though they can gain access to the directory, they are prompted for a username and password for every resource (i.e. image, CSS). After a while I can just keep selecting "cancel" and I will get a page with just HTML, with no images or CSS. I would think the browser would just cache the username/password. It seems to be working well for other users. Any thoughts?

  • How do I properly configure a ZipInstaller .zic file?

    - by Iszi Rory or Isznti
    As of version 1.20, ZipInstaller is supposed to support the use of a configuration file to customize its installation options. Generally, all the options I want to use are available through the dialog, so I really haven't bothered with the configuration file until now. The problem now is that certain tools, such as PsTools from Sysinternals, do not properly show their Product Name to ZipInstaller. ZipInstaller's dialog will let you customize the Start Menu folder and Program Files folder, but that still doesn't change the Product Name that it sees for the software. So, instead of having "PsTools" in my Add/Remove Programs, I get "Sysinternals Software". For some things, the situation is even more confusing. For example, the NIST SP 800-53 Reference Database Application gets installed as "FileMaker Pro Runtime". To rectify this, I've tried to use the aforementioned .zic configuration file. As I understand it, it's a basic INI file you create and put in the root of the ZIP file. ZipInstaller is supposed to read that file and adjust its parameters accordingly. Mine looks like this:

      [install]
      ProductName=NIST_SP_800-53
      ProductVersion=1.4.1
      CompanyName=NIST
      Description=NIST_SP_800-53
      InstallFolder=%zi.ProgramFiles%\%zi.ProductName%
      StartMenuFolder=%zi.CompanyName%\%zi.ProductName%

    I've named it ~zipinst~.zic and placed it in the root of the ZIP file, but when I run ZipInstaller it doesn't seem to recognize any of the information I've given it in the .zic file. What might I be doing wrong here?

  • Liferay - Verify each node in a cluster

    - by Schrute
    In this example, I have two clustered instances of Liferay running on the bundled Tomcat, using Cluster Link and shared documents. Let's say the name of the public community is fubar and the friendly URL used is fubar.lipsum.com. Let's say the port listening on each server is 8080. If I go to either server1:8080 or server2:8080, I get the default page for Liferay. How can I test fubar.lipsum.com against each node by going through the backend server, so I can verify each server individually? If I just test it normally, the request goes to the load balancer; I wish there were a way to point it at the backend connection to bring the site up. I can add the friendly URL to my local machine's hosts file and this seems to kinda work, but then once something is called in the application, it tries to go out again from the backend server, and then it uses SSL and we have problems. I think I may be able to do port forwarding, but this seems like a basic thing we should be able to do, and what I've found so far in the admin docs has not helped. Using the option to print the server name in the page details isn't an option either.
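
    One idea I had was forcing the Host header at each backend instead of touching hosts files, roughly like this (assuming curl is available somewhere on the network; the /web/fubar path is my guess at the community's default friendly-URL mapping):

      # ask server1 directly, but present the friendly-URL hostname
      curl -H "Host: fubar.lipsum.com" http://server1:8080/web/fubar/
      # then repeat against server2:8080 to verify the second node

    But I'm not sure whether that avoids the SSL redirect problem I described.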

  • Authenticated User Impersonation in Classic ASP under IIS7

    - by user52663
    I've recently moved one of our servers from Server 2003 and IIS6 to Server 2008 R2 and IIS7 (technically IIS7.5, I suppose). In doing so I am transitioning a small account management tool written in classic ASP and have run into a problem with user impersonation. Extensive searching hasn't been much help so far. Under IIS6, the site was configured to impersonate the logged-in user. Thus, if a domain admin logged in, he was able to run commands to create user directories, adjust permissions, etc. Using Procmon you can see the processes executing as that user. This worked fine. However, with the same code under IIS7, I am unable to get this behavior. I have enabled Basic Authentication, disabled Anonymous Authentication, enabled impersonation, and have changed the app pool to classic instead of integrated pipeline mode. Everything seems to be configured correctly; however, all the processes launched by the classic ASP site continue to run as the default app pool identity and not the logged-in user. If it matters, programs are being launched with code such as:

      set Wsh = Server.CreateObject("WScript.Shell")
      Wsh.Run("cmd.exe /C mkdir D:\users\foo")

    Monitoring via Procmon shows cmd.exe being run as either "Classic .NET AppPool" or "DefaultAppPool", depending on the pipeline mode. Any suggestions on how to get the classic ASP site to impersonate and execute as the authenticated user would be great. Thanks!

  • Processing-time billing in Amazon EC2

    - by Rafael Almeida
    Hi all! I think my question is fairly basic, but I would like a clarification: in the Pricing part of AWS we can see that Amazon charges around $0.10 per 'instance computing hour'. I've seen in a blog post somewhere (can't remember where exactly, and even if I did I think it was in Portuguese anyway) that this way your minimum monthly payment would be $72 (= $0.10/hour x 24 hours x 30 days). Is this correct? (I don't think it is!) My understanding is that this 'virtual computing time' is only used when your machine is actually doing something (serving pages, serving the admin via ssh, whatever), so real billable usage would be less than 720 hours/month in most webserver scenarios. Is my view correct? If it is, then it leads me to another question: is it economically interesting to buy access to one of these instances for testing? I mean, would I have the 'freedom' to 'forget' about it for a month and receive a very-close-to-zero (as in, a few cents) bill? Do you do it, or know of anybody who does? Any thoughts on the matter (as in, "yes, it's a good idea", or "yes, but there's this 'gotcha': ...", or "no, nobody does it because of...")? PS: sorry for the long question text. I highlighted the main questions for easy viewing. Also, I'm not sure if this question is actually more than one, and if it's desirable for the community, so sorry if it is too! Thanks in advance!
